Test Report: QEMU_macOS 19649

32fce3c1cb58db02ee1cd4b36165a584c8a30f83:2024-09-16:36244

Failed tests (99/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.49
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.26
22 TestOffline 10.02
33 TestAddons/parallel/Registry 71.26
46 TestCertOptions 10.23
47 TestCertExpiration 195.27
48 TestDockerFlags 10.19
49 TestForceSystemdFlag 10.26
50 TestForceSystemdEnv 10.39
95 TestFunctional/parallel/ServiceCmdConnect 41.6
167 TestMultiControlPlane/serial/StopSecondaryNode 115.95
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 77.86
169 TestMultiControlPlane/serial/RestartSecondaryNode 110.66
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 136.27
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 103.93
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.22
184 TestJSONOutput/start/Command 9.97
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.06
213 TestMinikubeProfile 10.2
216 TestMountStart/serial/StartWithMountFirst 10.09
219 TestMultiNode/serial/FreshStart2Nodes 10.02
220 TestMultiNode/serial/DeployApp2Nodes 105.57
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 46.46
228 TestMultiNode/serial/RestartKeepsNodes 8.65
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.45
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 20.1
236 TestPreload 10.08
238 TestScheduledStopUnix 10.1
239 TestSkaffold 12.42
242 TestRunningBinaryUpgrade 598.35
244 TestKubernetesUpgrade 18.46
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.59
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.14
260 TestStoppedBinaryUpgrade/Upgrade 573.24
262 TestPause/serial/Start 10.05
272 TestNoKubernetes/serial/StartWithK8s 10
273 TestNoKubernetes/serial/StartWithStopK8s 5.29
274 TestNoKubernetes/serial/Start 5.29
278 TestNoKubernetes/serial/StartNoArgs 5.3
280 TestNetworkPlugins/group/auto/Start 9.93
281 TestNetworkPlugins/group/kindnet/Start 9.88
282 TestNetworkPlugins/group/calico/Start 9.8
283 TestNetworkPlugins/group/custom-flannel/Start 10.02
284 TestNetworkPlugins/group/false/Start 10.12
285 TestNetworkPlugins/group/enable-default-cni/Start 9.86
286 TestNetworkPlugins/group/flannel/Start 9.87
287 TestNetworkPlugins/group/bridge/Start 9.83
288 TestNetworkPlugins/group/kubenet/Start 9.89
291 TestStartStop/group/old-k8s-version/serial/FirstStart 10.19
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.95
304 TestStartStop/group/embed-certs/serial/FirstStart 11.34
305 TestStartStop/group/no-preload/serial/DeployApp 0.1
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.16
309 TestStartStop/group/no-preload/serial/SecondStart 5.95
310 TestStartStop/group/embed-certs/serial/DeployApp 0.1
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.09
315 TestStartStop/group/no-preload/serial/Pause 0.12
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.09
320 TestStartStop/group/embed-certs/serial/SecondStart 5.78
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
323 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
324 TestStartStop/group/embed-certs/serial/Pause 0.11
326 TestStartStop/group/newest-cni/serial/FirstStart 11.67
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.14
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.74
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.25
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
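
Many of the qemu2 start failures in this table abort after roughly 10 seconds with the same error, visible in the TestOffline log below: Failed to connect to "/var/run/socket_vmnet": Connection refused. In other words, the socket_vmnet network helper was not running (or not reachable) on this agent. A minimal Go probe of the default socket path, sketched here, would confirm whether the helper is up before re-running the suite:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Default socket_vmnet socket path, as used by the failing runs below.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // matches the "Connection refused" in the logs
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}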
TestDownloadOnly/v1.20.0/json-events (14.49s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-863000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-863000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.490383584s)

-- stdout --
	{"specversion":"1.0","id":"fc7d34cd-3943-4c6a-acc1-ceb868fb3c08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-863000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a6a8491-6e8e-44fb-acac-1ae5c2a1fbc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"262ffadc-63cf-4a43-8713-e06353d947b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig"}}
	{"specversion":"1.0","id":"c835af25-8895-4122-9ac5-2aca8c33b5ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0deee2d0-50cb-463f-879f-cdfbe630b453","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"89a69585-4736-43a8-b361-6ac7e7c73975","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube"}}
	{"specversion":"1.0","id":"415eef18-882a-46f9-b719-e594efc0d7c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"6b899a0e-b1cc-4070-bd82-c58bbe8a476f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c569205-bc3f-4e79-93de-37c6b9fd1b52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b5c43cd9-0bcd-426a-99fc-ee43a0e8fb76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb2a8828-8b38-4e82-b586-0abafef57463","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-863000\" primary control-plane node in \"download-only-863000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d8f71f9-3268-4686-af62-83bd66c011e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"489e7448-03b2-4681-9930-ed7a12d79de5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109001780 0x109001780 0x109001780 0x109001780 0x109001780 0x109001780 0x109001780] Decompressors:map[bz2:0x140007021b0 gz:0x140007021b8 tar:0x14000702150 tar.bz2:0x14000702160 tar.gz:0x14000702170 tar.xz:0x14000702190 tar.zst:0x140007021a0 tbz2:0x14000702160 tgz:0x140
00702170 txz:0x14000702190 tzst:0x140007021a0 xz:0x14000702200 zip:0x14000702210 zst:0x14000702208] Getters:map[file:0x14000802790 http:0x140004a2190 https:0x140004a2320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"75601dce-1459-449a-9a68-b10f3cfb2c1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0916 10:04:21.999490    1453 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:04:21.999659    1453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:21.999662    1453 out.go:358] Setting ErrFile to fd 2...
	I0916 10:04:21.999664    1453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:21.999784    1453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	W0916 10:04:21.999883    1453 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19649-964/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19649-964/.minikube/config/config.json: no such file or directory
	I0916 10:04:22.001143    1453 out.go:352] Setting JSON to true
	I0916 10:04:22.019332    1453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":226,"bootTime":1726506036,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:04:22.019392    1453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:04:22.022213    1453 out.go:97] [download-only-863000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:04:22.022361    1453 notify.go:220] Checking for updates...
	W0916 10:04:22.022416    1453 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:04:22.027030    1453 out.go:169] MINIKUBE_LOCATION=19649
	I0916 10:04:22.032071    1453 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:04:22.035070    1453 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:04:22.039055    1453 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:04:22.042117    1453 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	W0916 10:04:22.048047    1453 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:04:22.048267    1453 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:04:22.054071    1453 out.go:97] Using the qemu2 driver based on user configuration
	I0916 10:04:22.054092    1453 start.go:297] selected driver: qemu2
	I0916 10:04:22.054105    1453 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:04:22.054184    1453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:04:22.058069    1453 out.go:169] Automatically selected the socket_vmnet network
	I0916 10:04:22.063733    1453 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0916 10:04:22.063815    1453 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:04:22.063856    1453 cni.go:84] Creating CNI manager for ""
	I0916 10:04:22.063885    1453 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:04:22.063932    1453 start.go:340] cluster config:
	{Name:download-only-863000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:04:22.069421    1453 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:04:22.072114    1453 out.go:97] Downloading VM boot image ...
	I0916 10:04:22.072130    1453 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0916 10:04:29.889012    1453 out.go:97] Starting "download-only-863000" primary control-plane node in "download-only-863000" cluster
	I0916 10:04:29.889035    1453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:04:29.952153    1453 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 10:04:29.952178    1453 cache.go:56] Caching tarball of preloaded images
	I0916 10:04:29.952371    1453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:04:29.956605    1453 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 10:04:29.956612    1453 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 10:04:30.033551    1453 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 10:04:35.207493    1453 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 10:04:35.207655    1453 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 10:04:35.903452    1453 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 10:04:35.903671    1453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/download-only-863000/config.json ...
	I0916 10:04:35.903691    1453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/download-only-863000/config.json: {Name:mk2e69a70769f5ad88b914cb7686bf971e95ba03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:04:35.903939    1453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:04:35.904134    1453 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0916 10:04:36.412159    1453 out.go:193] 
	W0916 10:04:36.418100    1453 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109001780 0x109001780 0x109001780 0x109001780 0x109001780 0x109001780 0x109001780] Decompressors:map[bz2:0x140007021b0 gz:0x140007021b8 tar:0x14000702150 tar.bz2:0x14000702160 tar.gz:0x14000702170 tar.xz:0x14000702190 tar.zst:0x140007021a0 tbz2:0x14000702160 tgz:0x14000702170 txz:0x14000702190 tzst:0x140007021a0 xz:0x14000702200 zip:0x14000702210 zst:0x14000702208] Getters:map[file:0x14000802790 http:0x140004a2190 https:0x140004a2320] Dir:false ProgressListe
ner:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0916 10:04:36.418126    1453 out_reason.go:110] 
	W0916 10:04:36.429057    1453 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:04:36.432969    1453 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-863000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.49s)
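
The failure here is the 404 on the kubectl checksum file: v1.20.0 appears to predate published darwin/arm64 kubectl builds, so dl.k8s.io returns 404 for both the binary and its .sha256, and the cache step aborts with INET_CACHE_KUBECTL. A quick probe (assuming outbound network access) reproduces the response seen in the log:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL from the failure above; the binary URL fails the same way.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // expected here: 404 Not Found
	}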

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
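
This is a knock-on failure from TestDownloadOnly/v1.20.0/json-events above: kubectl was never downloaded, so the cached binary the test stats does not exist. The assertion reduces to a stat call on the cache path; a sketch using this run's path (substitute your own MINIKUBE_HOME prefix):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache path taken from this run's log.
		p := "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(p); err != nil {
			fmt.Println("cached kubectl missing:", err) // the test reports exactly this stat error
		}
	}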

TestBinaryMirror (0.26s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-752000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-752000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 : exit status 40 (160.9805ms)

-- stdout --
	* [binary-mirror-752000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-752000" primary control-plane node in "binary-mirror-752000" cluster
	
	

-- /stdout --
** stderr ** 
	I0916 10:04:43.452131    1516 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:04:43.452257    1516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:43.452260    1516 out.go:358] Setting ErrFile to fd 2...
	I0916 10:04:43.452263    1516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:43.452375    1516 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:04:43.453434    1516 out.go:352] Setting JSON to false
	I0916 10:04:43.469554    1516 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":247,"bootTime":1726506036,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:04:43.469644    1516 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:04:43.474965    1516 out.go:177] * [binary-mirror-752000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:04:43.481919    1516 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:04:43.481977    1516 notify.go:220] Checking for updates...
	I0916 10:04:43.488846    1516 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:04:43.491883    1516 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:04:43.494902    1516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:04:43.497836    1516 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:04:43.501058    1516 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:04:43.504923    1516 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:04:43.511875    1516 start.go:297] selected driver: qemu2
	I0916 10:04:43.511884    1516 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:04:43.511964    1516 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:04:43.514908    1516 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:04:43.520077    1516 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0916 10:04:43.520163    1516 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:04:43.520183    1516 cni.go:84] Creating CNI manager for ""
	I0916 10:04:43.520207    1516 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:04:43.520215    1516 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:04:43.520254    1516 start.go:340] cluster config:
	{Name:binary-mirror-752000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-752000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49310 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_
vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:04:43.523842    1516 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:04:43.530866    1516 out.go:177] * Starting "binary-mirror-752000" primary control-plane node in "binary-mirror-752000" cluster
	I0916 10:04:43.534889    1516 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:04:43.534906    1516 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:04:43.534921    1516 cache.go:56] Caching tarball of preloaded images
	I0916 10:04:43.535001    1516 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:04:43.535007    1516 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:04:43.535207    1516 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/binary-mirror-752000/config.json ...
	I0916 10:04:43.535218    1516 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/binary-mirror-752000/config.json: {Name:mkf9c09e59f56aa85f25ec859bebe91c9adf46fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:04:43.535590    1516 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:04:43.535641    1516 download.go:107] Downloading: http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0916 10:04:43.562065    1516 out.go:201] 
	W0916 10:04:43.565914    1516 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a2d780 0x108a2d780 0x108a2d780 0x108a2d780 0x108a2d780 0x108a2d780 0x108a2d780] Decompressors:map[bz2:0x1400055cf90 gz:0x1400055cf98 tar:0x1400055cf30 tar.bz2:0x1400055cf50 tar.gz:0x1400055cf60 tar.xz:0x1400055cf70 tar.zst:0x1400055cf80 tbz2:0x1400055cf50 tgz:0x1400055cf60 txz:0x1400055cf70 tzst:0x1400055cf80 xz:0x1400055cfa0 zip:0x1400055cfb0 zst:0x1400055cfa8] Getters:map[file:0x14001428a20 http:0x14000540a00 https:0x14000540a50] Dir:f
alse ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49310/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a2d780 0x108a2d780 0x108a2d780 0x108a2d780 0x108a2d780 0x108a2d780 0x108a2d780] Decompressors:map[bz2:0x1400055cf90 gz:0x1400055cf98 tar:0x1400055cf30 tar.bz2:0x1400055cf50 tar.gz:0x1400055cf60 tar.xz:0x1400055cf70 tar.zst:0x1400055cf80 tbz2:0x1400055cf50 tgz:0x1400055cf60 txz:0x1400055cf70 tzst:0x1400055cf80 xz:0x1400055cfa0 zip:0x1400055cfb0 zst:0x1400055cfa8] Getters:map[file:0x14001428a20 http:0x14000540a00 https:0x14000540a50] Dir:false ProgressListener:<nil> Insecure:false
DisableSymlinks:false Options:[]}: unexpected EOF
	W0916 10:04:43.565926    1516 out.go:270] * 
	* 
	W0916 10:04:43.566419    1516 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:04:43.580847    1516 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-752000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49310" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-752000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-752000
--- FAIL: TestBinaryMirror (0.26s)
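
Unlike the v1.20.0 case, this download fails with "unexpected EOF": the local mirror on 127.0.0.1:49310 closed the connection mid-response while serving the v1.31.1 kubectl checksum. As a rough stand-in for such a mirror (the real test harness may serve files differently), a static file server over a v1.31.1/bin/darwin/arm64 directory layout is enough to exercise --binary-mirror by hand:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Hypothetical mirror root; it must contain
		// v1.31.1/bin/darwin/arm64/kubectl and kubectl.sha256.
		fs := http.FileServer(http.Dir("/tmp/kubectl-mirror"))
		log.Fatal(http.ListenAndServe("127.0.0.1:49310", fs))
	}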

TestOffline (10.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-905000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-905000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.865010834s)

-- stdout --
	* [offline-docker-905000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-905000" primary control-plane node in "offline-docker-905000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-905000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:41:25.430241    3730 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:41:25.430365    3730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:25.430369    3730 out.go:358] Setting ErrFile to fd 2...
	I0916 10:41:25.430371    3730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:25.430510    3730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:41:25.431713    3730 out.go:352] Setting JSON to false
	I0916 10:41:25.449188    3730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2449,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:41:25.449258    3730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:41:25.455011    3730 out.go:177] * [offline-docker-905000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:41:25.461934    3730 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:41:25.461940    3730 notify.go:220] Checking for updates...
	I0916 10:41:25.469833    3730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:41:25.472906    3730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:41:25.475774    3730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:41:25.478834    3730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:41:25.481866    3730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:41:25.485150    3730 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:41:25.485217    3730 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:41:25.488841    3730 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:41:25.495840    3730 start.go:297] selected driver: qemu2
	I0916 10:41:25.495851    3730 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:41:25.495859    3730 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:41:25.497768    3730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:41:25.500814    3730 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:41:25.503937    3730 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:41:25.503954    3730 cni.go:84] Creating CNI manager for ""
	I0916 10:41:25.503977    3730 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:41:25.503981    3730 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:41:25.504016    3730 start.go:340] cluster config:
	{Name:offline-docker-905000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:25.507591    3730 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:41:25.514862    3730 out.go:177] * Starting "offline-docker-905000" primary control-plane node in "offline-docker-905000" cluster
	I0916 10:41:25.518782    3730 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:41:25.518809    3730 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:41:25.518820    3730 cache.go:56] Caching tarball of preloaded images
	I0916 10:41:25.518896    3730 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:41:25.518901    3730 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:41:25.518973    3730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/offline-docker-905000/config.json ...
	I0916 10:41:25.518987    3730 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/offline-docker-905000/config.json: {Name:mkdec4b1e01710be988d0b682ef4fc27b18ec768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:25.519298    3730 start.go:360] acquireMachinesLock for offline-docker-905000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:41:25.519328    3730 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "offline-docker-905000"
	I0916 10:41:25.519338    3730 start.go:93] Provisioning new machine with config: &{Name:offline-docker-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:41:25.519370    3730 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:41:25.523801    3730 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 10:41:25.539693    3730 start.go:159] libmachine.API.Create for "offline-docker-905000" (driver="qemu2")
	I0916 10:41:25.539726    3730 client.go:168] LocalClient.Create starting
	I0916 10:41:25.539801    3730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:41:25.539831    3730 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:25.539840    3730 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:25.539885    3730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:41:25.539908    3730 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:25.539916    3730 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:25.540281    3730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:41:25.700327    3730 main.go:141] libmachine: Creating SSH key...
	I0916 10:41:25.780491    3730 main.go:141] libmachine: Creating Disk image...
	I0916 10:41:25.780505    3730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:41:25.780714    3730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2
	I0916 10:41:25.797423    3730 main.go:141] libmachine: STDOUT: 
	I0916 10:41:25.797443    3730 main.go:141] libmachine: STDERR: 
	I0916 10:41:25.797502    3730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2 +20000M
	I0916 10:41:25.805857    3730 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:41:25.805876    3730 main.go:141] libmachine: STDERR: 
	I0916 10:41:25.805899    3730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2
	I0916 10:41:25.805904    3730 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:41:25.805919    3730 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:41:25.805946    3730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ff:dd:99:55:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2
	I0916 10:41:25.807736    3730 main.go:141] libmachine: STDOUT: 
	I0916 10:41:25.807761    3730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:41:25.807783    3730 client.go:171] duration metric: took 268.055667ms to LocalClient.Create
	I0916 10:41:27.809813    3730 start.go:128] duration metric: took 2.2904895s to createHost
	I0916 10:41:27.809833    3730 start.go:83] releasing machines lock for "offline-docker-905000", held for 2.29055375s
	W0916 10:41:27.809846    3730 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:27.846413    3730 out.go:177] * Deleting "offline-docker-905000" in qemu2 ...
	W0916 10:41:27.865292    3730 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:27.865303    3730 start.go:729] Will try again in 5 seconds ...
	I0916 10:41:32.867461    3730 start.go:360] acquireMachinesLock for offline-docker-905000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:41:32.868098    3730 start.go:364] duration metric: took 488.208µs to acquireMachinesLock for "offline-docker-905000"
	I0916 10:41:32.868269    3730 start.go:93] Provisioning new machine with config: &{Name:offline-docker-905000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:41:32.868598    3730 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:41:32.878264    3730 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 10:41:32.929419    3730 start.go:159] libmachine.API.Create for "offline-docker-905000" (driver="qemu2")
	I0916 10:41:32.929467    3730 client.go:168] LocalClient.Create starting
	I0916 10:41:32.929607    3730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:41:32.929667    3730 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:32.929685    3730 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:32.929756    3730 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:41:32.929804    3730 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:32.929821    3730 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:32.930357    3730 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:41:33.099643    3730 main.go:141] libmachine: Creating SSH key...
	I0916 10:41:33.192788    3730 main.go:141] libmachine: Creating Disk image...
	I0916 10:41:33.192793    3730 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:41:33.192975    3730 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2
	I0916 10:41:33.202457    3730 main.go:141] libmachine: STDOUT: 
	I0916 10:41:33.202473    3730 main.go:141] libmachine: STDERR: 
	I0916 10:41:33.202535    3730 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2 +20000M
	I0916 10:41:33.210422    3730 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:41:33.210440    3730 main.go:141] libmachine: STDERR: 
	I0916 10:41:33.210456    3730 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2
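The two qemu-img calls above are the whole disk-provisioning step: convert the raw scaffold into a qcow2 image, then grow it by the requested size. A minimal Go sketch of the same sequence, assuming only that qemu-img is on PATH (the file names are placeholders for the machine paths in the log):

	// disk_image.go - sketch of the qemu-img steps logged above.
	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and aborts with its combined output on failure,
	// mirroring how libmachine records STDOUT/STDERR for each qemu-img call.
	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		// Convert the raw scaffold into qcow2 format...
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
		// ...then grow the image by the requested 20000 MB.
		run("qemu-img", "resize", "disk.qcow2", "+20000M")
	}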
	I0916 10:41:33.210460    3730 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:41:33.210473    3730 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:41:33.210504    3730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:dd:a3:ac:a1:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/offline-docker-905000/disk.qcow2
	I0916 10:41:33.212100    3730 main.go:141] libmachine: STDOUT: 
	I0916 10:41:33.212115    3730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:41:33.212127    3730 client.go:171] duration metric: took 282.661334ms to LocalClient.Create
	I0916 10:41:35.214277    3730 start.go:128] duration metric: took 2.345697s to createHost
	I0916 10:41:35.214345    3730 start.go:83] releasing machines lock for "offline-docker-905000", held for 2.346251625s
	W0916 10:41:35.214758    3730 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-905000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-905000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:35.230404    3730 out.go:201] 
	W0916 10:41:35.234504    3730 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:41:35.234536    3730 out.go:270] * 
	* 
	W0916 10:41:35.237275    3730 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:41:35.253434    3730 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-905000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-16 10:41:35.267181 -0700 PDT m=+2233.413507668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-905000 -n offline-docker-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-905000 -n offline-docker-905000: exit status 7 (65.827417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-905000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-905000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-905000
--- FAIL: TestOffline (10.02s)
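Every qemu2 start in this report dies the same way: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and createHost aborts after ~280ms. A minimal Go sketch of that reachability check, assuming only that a healthy daemon accepts connections on the logged unix socket path:

	// probe_socket_vmnet.go - dial the unix socket that socket_vmnet_client
	// passes to QEMU as fd 3; "connection refused" reproduces this failure.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails on the agent, checking that the socket_vmnet daemon is actually running (it is typically managed as a launchd/brew service) is the first step before rerunning the suite.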

TestAddons/parallel/Registry (71.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.3505ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-z5m97" [aaeea89d-dbf1-40ff-8089-7095e3cd9e2a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009210959s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lmvtl" [00468330-2389-470c-9e96-e57cad540e47] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005501875s
addons_test.go:342: (dbg) Run:  kubectl --context addons-138000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-138000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-138000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.052157333s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-138000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 ip
2024/09/16 10:16:54 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable registry --alsologtostderr -v=1
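The failed assertion reduces to two reachability probes: the in-cluster service name (the wget above, which timed out after 1m0s) and the host-side GET against the node IP on the registry's port 5000 (the DEBUG line above). A hedged Go sketch of both checks; the IP is the one this run's `minikube ip` printed, and the service URL only resolves from inside the cluster:

	// registry_probe.go - the two registry reachability checks, sketched.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func probe(url string) {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("%s: %v\n", url, err)
			return
		}
		defer resp.Body.Close()
		fmt.Printf("%s: %s\n", url, resp.Status) // the test expects HTTP/1.1 200
	}

	func main() {
		// Resolvable only from inside the cluster (e.g. the busybox test pod):
		probe("http://registry.kube-system.svc.cluster.local")
		// Reachable from the host through the registry addon:
		probe("http://192.168.105.2:5000")
	}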
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-138000 -n addons-138000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-863000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT |                     |
	|         | -p download-only-863000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:04 PDT |
	| delete  | -p download-only-863000              | download-only-863000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:04 PDT |
	| start   | -o=json --download-only              | download-only-699000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT |                     |
	|         | -p download-only-699000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:04 PDT |
	| delete  | -p download-only-699000              | download-only-699000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:04 PDT |
	| delete  | -p download-only-863000              | download-only-863000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:04 PDT |
	| delete  | -p download-only-699000              | download-only-699000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:04 PDT |
	| start   | --download-only -p                   | binary-mirror-752000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT |                     |
	|         | binary-mirror-752000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-752000              | binary-mirror-752000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:04 PDT |
	| addons  | disable dashboard -p                 | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT |                     |
	|         | addons-138000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT |                     |
	|         | addons-138000                        |                      |         |         |                     |                     |
	| start   | -p addons-138000 --wait=true         | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:07 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-138000 addons disable         | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:07 PDT | 16 Sep 24 10:07 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-138000 addons                 | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT | 16 Sep 24 10:16 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-138000 addons                 | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT | 16 Sep 24 10:16 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-138000 addons                 | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT | 16 Sep 24 10:16 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT | 16 Sep 24 10:16 PDT |
	|         | addons-138000                        |                      |         |         |                     |                     |
	| ssh     | addons-138000 ssh curl -s            | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT | 16 Sep 24 10:16 PDT |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-138000 ip                     | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT | 16 Sep 24 10:16 PDT |
	| addons  | addons-138000 addons disable         | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT | 16 Sep 24 10:16 PDT |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-138000 addons disable         | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT |                     |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| ip      | addons-138000 ip                     | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT | 16 Sep 24 10:16 PDT |
	| addons  | addons-138000 addons disable         | addons-138000        | jenkins | v1.34.0 | 16 Sep 24 10:16 PDT | 16 Sep 24 10:16 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:04:43
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:04:43.752840    1530 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:04:43.752960    1530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:43.752967    1530 out.go:358] Setting ErrFile to fd 2...
	I0916 10:04:43.752970    1530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:43.753103    1530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:04:43.754178    1530 out.go:352] Setting JSON to false
	I0916 10:04:43.770315    1530 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":247,"bootTime":1726506036,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:04:43.770375    1530 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:04:43.774078    1530 out.go:177] * [addons-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:04:43.780889    1530 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:04:43.780989    1530 notify.go:220] Checking for updates...
	I0916 10:04:43.787830    1530 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:04:43.790850    1530 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:04:43.793947    1530 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:04:43.796810    1530 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:04:43.799875    1530 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:04:43.803065    1530 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:04:43.806785    1530 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:04:43.813900    1530 start.go:297] selected driver: qemu2
	I0916 10:04:43.813907    1530 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:04:43.813915    1530 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:04:43.816213    1530 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:04:43.819858    1530 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:04:43.822952    1530 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:04:43.822979    1530 cni.go:84] Creating CNI manager for ""
	I0916 10:04:43.823001    1530 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:04:43.823009    1530 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:04:43.823045    1530 start.go:340] cluster config:
	{Name:addons-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:04:43.826728    1530 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:04:43.834821    1530 out.go:177] * Starting "addons-138000" primary control-plane node in "addons-138000" cluster
	I0916 10:04:43.838828    1530 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:04:43.838848    1530 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:04:43.838864    1530 cache.go:56] Caching tarball of preloaded images
	I0916 10:04:43.838944    1530 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:04:43.838950    1530 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:04:43.839218    1530 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/config.json ...
	I0916 10:04:43.839230    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/config.json: {Name:mk5ca911add2f92996015350a5b14639e6b8ab2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:04:43.839669    1530 start.go:360] acquireMachinesLock for addons-138000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:04:43.839737    1530 start.go:364] duration metric: took 62.125µs to acquireMachinesLock for "addons-138000"
	I0916 10:04:43.839748    1530 start.go:93] Provisioning new machine with config: &{Name:addons-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:04:43.839775    1530 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:04:43.847847    1530 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 10:04:44.064726    1530 start.go:159] libmachine.API.Create for "addons-138000" (driver="qemu2")
	I0916 10:04:44.064763    1530 client.go:168] LocalClient.Create starting
	I0916 10:04:44.064909    1530 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:04:44.280535    1530 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:04:44.433789    1530 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:04:44.835079    1530 main.go:141] libmachine: Creating SSH key...
	I0916 10:04:44.919898    1530 main.go:141] libmachine: Creating Disk image...
	I0916 10:04:44.919908    1530 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:04:44.920129    1530 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/disk.qcow2
	I0916 10:04:44.981377    1530 main.go:141] libmachine: STDOUT: 
	I0916 10:04:44.981401    1530 main.go:141] libmachine: STDERR: 
	I0916 10:04:44.981470    1530 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/disk.qcow2 +20000M
	I0916 10:04:44.989766    1530 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:04:44.989781    1530 main.go:141] libmachine: STDERR: 
	I0916 10:04:44.989796    1530 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/disk.qcow2
	I0916 10:04:44.989800    1530 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:04:44.989839    1530 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:04:44.989882    1530 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:15:70:07:43:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/disk.qcow2
	I0916 10:04:45.119068    1530 main.go:141] libmachine: STDOUT: 
	I0916 10:04:45.119101    1530 main.go:141] libmachine: STDERR: 
	I0916 10:04:45.119106    1530 main.go:141] libmachine: Attempt 0
	I0916 10:04:45.119123    1530 main.go:141] libmachine: Searching for 86:15:70:7:43:dc in /var/db/dhcpd_leases ...
	I0916 10:04:45.119180    1530 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 10:04:45.119200    1530 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e9b5c0}
	I0916 10:04:47.120327    1530 main.go:141] libmachine: Attempt 1
	I0916 10:04:47.120416    1530 main.go:141] libmachine: Searching for 86:15:70:7:43:dc in /var/db/dhcpd_leases ...
	I0916 10:04:47.120769    1530 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 10:04:47.120819    1530 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e9b5c0}
	I0916 10:04:49.122081    1530 main.go:141] libmachine: Attempt 2
	I0916 10:04:49.122248    1530 main.go:141] libmachine: Searching for 86:15:70:7:43:dc in /var/db/dhcpd_leases ...
	I0916 10:04:49.122617    1530 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 10:04:49.122667    1530 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e9b5c0}
	I0916 10:04:51.123835    1530 main.go:141] libmachine: Attempt 3
	I0916 10:04:51.123869    1530 main.go:141] libmachine: Searching for 86:15:70:7:43:dc in /var/db/dhcpd_leases ...
	I0916 10:04:51.123958    1530 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 10:04:51.123974    1530 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e9b5c0}
	I0916 10:04:53.125056    1530 main.go:141] libmachine: Attempt 4
	I0916 10:04:53.125090    1530 main.go:141] libmachine: Searching for 86:15:70:7:43:dc in /var/db/dhcpd_leases ...
	I0916 10:04:53.125155    1530 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 10:04:53.125169    1530 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e9b5c0}
	I0916 10:04:55.126202    1530 main.go:141] libmachine: Attempt 5
	I0916 10:04:55.126220    1530 main.go:141] libmachine: Searching for 86:15:70:7:43:dc in /var/db/dhcpd_leases ...
	I0916 10:04:55.126276    1530 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 10:04:55.126286    1530 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e9b5c0}
	I0916 10:04:57.127311    1530 main.go:141] libmachine: Attempt 6
	I0916 10:04:57.127329    1530 main.go:141] libmachine: Searching for 86:15:70:7:43:dc in /var/db/dhcpd_leases ...
	I0916 10:04:57.127385    1530 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0916 10:04:57.127394    1530 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e9b5c0}
	I0916 10:04:59.128405    1530 main.go:141] libmachine: Attempt 7
	I0916 10:04:59.128431    1530 main.go:141] libmachine: Searching for 86:15:70:7:43:dc in /var/db/dhcpd_leases ...
	I0916 10:04:59.128528    1530 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0916 10:04:59.128539    1530 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:86:15:70:7:43:dc ID:1,86:15:70:7:43:dc Lease:0x66e9b6b9}
	I0916 10:04:59.128542    1530 main.go:141] libmachine: Found match: 86:15:70:7:43:dc
	I0916 10:04:59.128550    1530 main.go:141] libmachine: IP: 192.168.105.2
	I0916 10:04:59.128554    1530 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
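The attempt loop above polls /var/db/dhcpd_leases every two seconds until the VM's MAC appears. Note the search key is 86:15:70:7:43:dc although QEMU was started with 86:15:70:07:43:dc: macOS writes lease hardware addresses with leading zeros stripped, so the MAC must be normalized before matching. A minimal sketch of that lookup (field names as macOS's bootpd typically writes them, with ip_address preceding hw_address in each block; retry logic omitted):

	// find_lease.go - scan the macOS DHCP lease file for a block whose
	// hw_address matches the target MAC and report its ip_address.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		mac := "86:15:70:7:43:dc" // already normalized, as in the log
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=") // held until hw_address is seen
			case strings.HasPrefix(line, "hw_address=1,") && strings.TrimPrefix(line, "hw_address=1,") == mac:
				fmt.Println("found lease:", ip)
				return
			}
		}
		fmt.Println("no lease yet; libmachine retries every 2s")
	}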
	I0916 10:05:01.146571    1530 machine.go:93] provisionDockerMachine start ...
	I0916 10:05:01.148088    1530 main.go:141] libmachine: Using SSH client type: native
	I0916 10:05:01.148549    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a99190] 0x102a9b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 10:05:01.148565    1530 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:05:01.220421    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 10:05:01.220453    1530 buildroot.go:166] provisioning hostname "addons-138000"
	I0916 10:05:01.220608    1530 main.go:141] libmachine: Using SSH client type: native
	I0916 10:05:01.220870    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a99190] 0x102a9b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 10:05:01.220880    1530 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-138000 && echo "addons-138000" | sudo tee /etc/hostname
	I0916 10:05:01.286966    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-138000
	
	I0916 10:05:01.287066    1530 main.go:141] libmachine: Using SSH client type: native
	I0916 10:05:01.287233    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a99190] 0x102a9b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 10:05:01.287248    1530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-138000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-138000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-138000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:05:01.341381    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:05:01.341398    1530 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19649-964/.minikube CaCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19649-964/.minikube}
	I0916 10:05:01.341418    1530 buildroot.go:174] setting up certificates
	I0916 10:05:01.341425    1530 provision.go:84] configureAuth start
	I0916 10:05:01.341429    1530 provision.go:143] copyHostCerts
	I0916 10:05:01.341559    1530 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem (1082 bytes)
	I0916 10:05:01.341791    1530 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem (1123 bytes)
	I0916 10:05:01.341910    1530 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem (1679 bytes)
	I0916 10:05:01.341989    1530 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem org=jenkins.addons-138000 san=[127.0.0.1 192.168.105.2 addons-138000 localhost minikube]
	I0916 10:05:01.529115    1530 provision.go:177] copyRemoteCerts
	I0916 10:05:01.529176    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:05:01.529195    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:01.556323    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:05:01.564421    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:05:01.572592    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:05:01.580594    1530 provision.go:87] duration metric: took 239.168083ms to configureAuth
	I0916 10:05:01.580603    1530 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:05:01.580721    1530 config.go:182] Loaded profile config "addons-138000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:05:01.580763    1530 main.go:141] libmachine: Using SSH client type: native
	I0916 10:05:01.580846    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a99190] 0x102a9b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 10:05:01.580851    1530 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 10:05:01.630165    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 10:05:01.630174    1530 buildroot.go:70] root file system type: tmpfs
	I0916 10:05:01.630226    1530 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 10:05:01.630296    1530 main.go:141] libmachine: Using SSH client type: native
	I0916 10:05:01.630395    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a99190] 0x102a9b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 10:05:01.630426    1530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 10:05:01.682854    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 10:05:01.682907    1530 main.go:141] libmachine: Using SSH client type: native
	I0916 10:05:01.683057    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a99190] 0x102a9b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 10:05:01.683069    1530 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 10:05:03.060556    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 10:05:03.060572    1530 machine.go:96] duration metric: took 1.914008292s to provisionDockerMachine
	I0916 10:05:03.060578    1530 client.go:171] duration metric: took 18.996141667s to LocalClient.Create
	I0916 10:05:03.060592    1530 start.go:167] duration metric: took 18.996199417s to libmachine.API.Create "addons-138000"
	I0916 10:05:03.060596    1530 start.go:293] postStartSetup for "addons-138000" (driver="qemu2")
	I0916 10:05:03.060602    1530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:05:03.060695    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:05:03.060709    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:03.089192    1530 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:05:03.090630    1530 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 10:05:03.090638    1530 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19649-964/.minikube/addons for local assets ...
	I0916 10:05:03.090718    1530 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19649-964/.minikube/files for local assets ...
	I0916 10:05:03.090752    1530 start.go:296] duration metric: took 30.152708ms for postStartSetup
	I0916 10:05:03.091173    1530 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/config.json ...
	I0916 10:05:03.091362    1530 start.go:128] duration metric: took 19.251919542s to createHost
	I0916 10:05:03.091387    1530 main.go:141] libmachine: Using SSH client type: native
	I0916 10:05:03.091476    1530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a99190] 0x102a9b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0916 10:05:03.091481    1530 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:05:03.138181    1530 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726506303.110999503
	
	I0916 10:05:03.138190    1530 fix.go:216] guest clock: 1726506303.110999503
	I0916 10:05:03.138195    1530 fix.go:229] Guest: 2024-09-16 10:05:03.110999503 -0700 PDT Remote: 2024-09-16 10:05:03.091365 -0700 PDT m=+19.358083001 (delta=19.634503ms)
	I0916 10:05:03.138221    1530 fix.go:200] guest clock delta is within tolerance: 19.634503ms
	I0916 10:05:03.138224    1530 start.go:83] releasing machines lock for "addons-138000", held for 19.298818625s
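createHost finishes with a clock-skew check: it runs `date +%s.%N` in the guest, parses the result, and compares it with the host clock (here a 19.634503ms delta, well inside tolerance). A small sketch of that comparison using the two timestamps captured above; the tolerance value is a placeholder, not minikube's actual setting:

	// clock_delta.go - compare a guest `date +%s.%N` sample to the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		guestOut := "1726506303.110999503"        // guest `date +%s.%N` from this run
		host := time.Unix(0, 1726506303091365000) // host clock at the same instant

		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		// float64 parsing loses sub-microsecond precision; fine for a skew check.
		guest := time.Unix(0, int64(secs*float64(time.Second)))

		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // placeholder threshold
		if delta < tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock skew %v exceeds %v\n", delta, tolerance)
		}
	}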
	I0916 10:05:03.138547    1530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:05:03.138547    1530 ssh_runner.go:195] Run: cat /version.json
	I0916 10:05:03.138588    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:03.138588    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:03.206844    1530 ssh_runner.go:195] Run: systemctl --version
	I0916 10:05:03.209087    1530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:05:03.211134    1530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:05:03.211167    1530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:05:03.217095    1530 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:05:03.217102    1530 start.go:495] detecting cgroup driver to use...
	I0916 10:05:03.217227    1530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:05:03.223441    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:05:03.226978    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:05:03.230680    1530 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:05:03.230710    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:05:03.234553    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:05:03.238579    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:05:03.242457    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:05:03.246254    1530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:05:03.250179    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:05:03.254085    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:05:03.258021    1530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:05:03.262223    1530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:05:03.265868    1530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:05:03.269627    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:05:03.338295    1530 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:05:03.349286    1530 start.go:495] detecting cgroup driver to use...
	I0916 10:05:03.349360    1530 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 10:05:03.355745    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:05:03.360867    1530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:05:03.367828    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:05:03.372751    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:05:03.377877    1530 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:05:03.421223    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:05:03.427707    1530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:05:03.434288    1530 ssh_runner.go:195] Run: which cri-dockerd
	I0916 10:05:03.435810    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:05:03.439010    1530 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 10:05:03.445016    1530 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 10:05:03.531515    1530 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 10:05:03.619211    1530 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:05:03.619262    1530 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:05:03.625422    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:05:03.713013    1530 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:05:05.901061    1530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.18806825s)
	I0916 10:05:05.901142    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:05:05.906675    1530 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 10:05:05.913976    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:05:05.919550    1530 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:05:05.982222    1530 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 10:05:06.064004    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:05:06.146354    1530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 10:05:06.153517    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:05:06.158930    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:05:06.230286    1530 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 10:05:06.257331    1530 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:05:06.257433    1530 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 10:05:06.259575    1530 start.go:563] Will wait 60s for crictl version
	I0916 10:05:06.259619    1530 ssh_runner.go:195] Run: which crictl
	I0916 10:05:06.260854    1530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:05:06.276186    1530 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 10:05:06.276261    1530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:05:06.285950    1530 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:05:06.297520    1530 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 10:05:06.297656    1530 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0916 10:05:06.299037    1530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
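
The /etc/hosts command above is minikube's idempotent host-record idiom: filter out any stale host.minikube.internal line, append the fresh mapping, then copy the scratch file back over /etc/hosts. Unrolled for readability (same command as in the log; $$ is the shell PID used as a temp-file suffix, and the separator is a literal tab):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.105.1	host.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
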
	I0916 10:05:06.303101    1530 kubeadm.go:883] updating cluster {Name:addons-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:05:06.303158    1530 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:05:06.303215    1530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:05:06.307967    1530 docker.go:685] Got preloaded images: 
	I0916 10:05:06.307977    1530 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0916 10:05:06.308018    1530 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:05:06.311487    1530 ssh_runner.go:195] Run: which lz4
	I0916 10:05:06.312944    1530 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:05:06.314295    1530 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:05:06.314306    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0916 10:05:07.575795    1530 docker.go:649] duration metric: took 1.2629155s to copy over tarball
	I0916 10:05:07.575862    1530 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:05:08.516354    1530 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:05:08.531084    1530 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:05:08.534769    1530 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0916 10:05:08.540824    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:05:08.616843    1530 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:05:10.989674    1530 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.372854792s)
	I0916 10:05:10.989799    1530 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:05:10.996146    1530 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 10:05:10.996156    1530 cache_images.go:84] Images are preloaded, skipping loading
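
The preload sequence above: a stat probe confirms the tarball is absent, ~322 MB of pre-pulled images is copied over SSH, the archive is unpacked into /var (which backs /var/lib/docker), Docker is restarted, and the image list is re-checked. The guest-side portion as a sketch (the copy itself is minikube's internal SSH transfer, represented here only by its destination path):

    # Unpack the preloaded image tarball into the Docker store and verify.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo systemctl restart docker
    docker images --format '{{.Repository}}:{{.Tag}}'   # should now list the kube-* images
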
	I0916 10:05:10.996176    1530 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0916 10:05:10.996248    1530 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-138000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
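
Note the empty ExecStart= line in the kubelet drop-in above: by systemd's drop-in semantics it clears the base unit's command list so the following ExecStart fully replaces it rather than appending a second command. The merged result can be inspected on the guest:

    sudo systemctl cat kubelet   # base unit followed by the 10-kubeadm.conf drop-in
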
	I0916 10:05:10.996320    1530 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 10:05:11.015034    1530 cni.go:84] Creating CNI manager for ""
	I0916 10:05:11.015050    1530 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:05:11.015056    1530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:05:11.015067    1530 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-138000 NodeName:addons-138000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:05:11.015138    1530 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-138000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:05:11.015201    1530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:05:11.018870    1530 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:05:11.018913    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:05:11.022344    1530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:05:11.028400    1530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:05:11.034349    1530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
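
At this point the kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new. As an aside (not part of this log), the generated file can be sanity-checked before the real init using kubeadm's dry-run mode with the same pinned binary:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
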
	I0916 10:05:11.040382    1530 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:05:11.041763    1530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:05:11.046081    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:05:11.114695    1530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:05:11.127661    1530 certs.go:68] Setting up /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000 for IP: 192.168.105.2
	I0916 10:05:11.127670    1530 certs.go:194] generating shared ca certs ...
	I0916 10:05:11.127679    1530 certs.go:226] acquiring lock for ca certs: {Name:mk95bad6e61a22ab8ae5ec5f8cd43ca9ad7a3f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.127880    1530 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key
	I0916 10:05:11.175839    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt ...
	I0916 10:05:11.175854    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt: {Name:mk979ea78377c3c68c5eeea839a226a9facc38d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.176186    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key ...
	I0916 10:05:11.176190    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key: {Name:mk8f09f09c7d20d7dd7917c5d5640f5e40ea04a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.176352    1530 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key
	I0916 10:05:11.282738    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.crt ...
	I0916 10:05:11.282743    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.crt: {Name:mk1a852948f6c7d016db6546a8c9ebdc8816490b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.282885    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key ...
	I0916 10:05:11.282888    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key: {Name:mk722b3826624735395014e7189045a60e377ee0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.283016    1530 certs.go:256] generating profile certs ...
	I0916 10:05:11.283053    1530 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.key
	I0916 10:05:11.283059    1530 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt with IP's: []
	I0916 10:05:11.522975    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt ...
	I0916 10:05:11.522992    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: {Name:mk136b8f659effc80ef2f33bf36ac89e4dcc14d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.523649    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.key ...
	I0916 10:05:11.523654    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.key: {Name:mk6a110627435cd678c9ad5a809ec310abe68108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.523789    1530 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.key.d3ee2f0a
	I0916 10:05:11.523803    1530 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.crt.d3ee2f0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0916 10:05:11.600797    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.crt.d3ee2f0a ...
	I0916 10:05:11.600802    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.crt.d3ee2f0a: {Name:mke22952802d84f8fa734ad1e130f96081f26a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.600950    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.key.d3ee2f0a ...
	I0916 10:05:11.600954    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.key.d3ee2f0a: {Name:mk27ce536fa63715a9ca889669bbcf45dd642568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.601085    1530 certs.go:381] copying /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.crt.d3ee2f0a -> /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.crt
	I0916 10:05:11.601275    1530 certs.go:385] copying /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.key.d3ee2f0a -> /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.key
	I0916 10:05:11.601390    1530 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/proxy-client.key
	I0916 10:05:11.601399    1530 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/proxy-client.crt with IP's: []
	I0916 10:05:11.668814    1530 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/proxy-client.crt ...
	I0916 10:05:11.668821    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/proxy-client.crt: {Name:mk3d180fc35db8eb51b5aa3c3448b821319c98d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.668961    1530 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/proxy-client.key ...
	I0916 10:05:11.668965    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/proxy-client.key: {Name:mkc45a0914622ee6155333fc0dfd2fa0cf1b07f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:11.669228    1530 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:05:11.669253    1530 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:05:11.669274    1530 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:05:11.669296    1530 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem (1679 bytes)
	I0916 10:05:11.669744    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:05:11.678897    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:05:11.686883    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:05:11.694831    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 10:05:11.702746    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:05:11.710662    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:05:11.718649    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:05:11.726765    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:05:11.734694    1530 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:05:11.742757    1530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:05:11.749585    1530 ssh_runner.go:195] Run: openssl version
	I0916 10:05:11.751923    1530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:05:11.760313    1530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:05:11.762008    1530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:05:11.762047    1530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:05:11.764285    1530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
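
The link name b5213941.0 is not arbitrary: OpenSSL locates CA certificates in /etc/ssl/certs by subject hash, and the "openssl x509 -hash" call just above produces exactly that value for minikubeCA.pem:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the trust link /etc/ssl/certs/b5213941.0
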
	I0916 10:05:11.768598    1530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:05:11.770224    1530 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:05:11.770270    1530 kubeadm.go:392] StartCluster: {Name:addons-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:05:11.770349    1530 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:05:11.778907    1530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:05:11.782893    1530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:05:11.786855    1530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:05:11.790637    1530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:05:11.790644    1530 kubeadm.go:157] found existing configuration files:
	
	I0916 10:05:11.790676    1530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:05:11.794084    1530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:05:11.794116    1530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:05:11.797620    1530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:05:11.800989    1530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:05:11.801016    1530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:05:11.804227    1530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:05:11.807264    1530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:05:11.807291    1530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:05:11.810535    1530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:05:11.813751    1530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:05:11.813776    1530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:05:11.817414    1530 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:05:11.840252    1530 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:05:11.840277    1530 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:05:11.879992    1530 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:05:11.880056    1530 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:05:11.880116    1530 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:05:11.884066    1530 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:05:11.890281    1530 out.go:235]   - Generating certificates and keys ...
	I0916 10:05:11.890319    1530 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:05:11.890352    1530 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:05:11.998851    1530 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:05:12.216301    1530 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:05:12.320442    1530 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:05:12.585277    1530 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:05:12.744091    1530 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:05:12.744154    1530 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-138000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0916 10:05:12.811093    1530 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:05:12.811159    1530 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-138000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0916 10:05:12.944816    1530 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:05:13.094315    1530 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:05:13.202848    1530 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:05:13.202885    1530 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:05:13.306358    1530 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:05:13.385357    1530 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:05:13.471756    1530 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:05:13.519530    1530 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:05:13.590403    1530 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:05:13.590591    1530 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:05:13.591865    1530 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:05:13.596069    1530 out.go:235]   - Booting up control plane ...
	I0916 10:05:13.596129    1530 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:05:13.596168    1530 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:05:13.596201    1530 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:05:13.600378    1530 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:05:13.602927    1530 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:05:13.602963    1530 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:05:13.684790    1530 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:05:13.684860    1530 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:05:14.186600    1530 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.852334ms
	I0916 10:05:14.186647    1530 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:05:17.692081    1530 kubeadm.go:310] [api-check] The API server is healthy after 3.504536085s
	I0916 10:05:17.716798    1530 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:05:17.726829    1530 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:05:17.741292    1530 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:05:17.741476    1530 kubeadm.go:310] [mark-control-plane] Marking the node addons-138000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:05:17.747079    1530 kubeadm.go:310] [bootstrap-token] Using token: hgqcyh.ghc7b4s6vxp82i5u
	I0916 10:05:17.759924    1530 out.go:235]   - Configuring RBAC rules ...
	I0916 10:05:17.760000    1530 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:05:17.760059    1530 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:05:17.763838    1530 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:05:17.765609    1530 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:05:17.766852    1530 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:05:17.768108    1530 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:05:18.098669    1530 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:05:18.505667    1530 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:05:19.098550    1530 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:05:19.098997    1530 kubeadm.go:310] 
	I0916 10:05:19.099038    1530 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:05:19.099044    1530 kubeadm.go:310] 
	I0916 10:05:19.099111    1530 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:05:19.099125    1530 kubeadm.go:310] 
	I0916 10:05:19.099144    1530 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:05:19.099184    1530 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:05:19.099236    1530 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:05:19.099244    1530 kubeadm.go:310] 
	I0916 10:05:19.099283    1530 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:05:19.099288    1530 kubeadm.go:310] 
	I0916 10:05:19.099318    1530 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:05:19.099322    1530 kubeadm.go:310] 
	I0916 10:05:19.099357    1530 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:05:19.099413    1530 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:05:19.099457    1530 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:05:19.099463    1530 kubeadm.go:310] 
	I0916 10:05:19.099528    1530 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:05:19.099588    1530 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:05:19.099595    1530 kubeadm.go:310] 
	I0916 10:05:19.099660    1530 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hgqcyh.ghc7b4s6vxp82i5u \
	I0916 10:05:19.099733    1530 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda \
	I0916 10:05:19.099749    1530 kubeadm.go:310] 	--control-plane 
	I0916 10:05:19.099760    1530 kubeadm.go:310] 
	I0916 10:05:19.099822    1530 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:05:19.099829    1530 kubeadm.go:310] 
	I0916 10:05:19.099885    1530 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hgqcyh.ghc7b4s6vxp82i5u \
	I0916 10:05:19.099971    1530 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda 
	I0916 10:05:19.100177    1530 kubeadm.go:310] W0916 17:05:11.811258    1580 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:05:19.100371    1530 kubeadm.go:310] W0916 17:05:11.811564    1580 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:05:19.100443    1530 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
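
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate with the standard recipe from the kubeadm documentation (certificatesDir for this cluster is /var/lib/minikube/certs per the kubeadm config; the "openssl rsa" step assumes an RSA CA key):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
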
	I0916 10:05:19.100450    1530 cni.go:84] Creating CNI manager for ""
	I0916 10:05:19.100460    1530 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:05:19.105356    1530 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:05:19.109296    1530 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:05:19.113684    1530 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
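
The 496-byte conflist written above is minikube's bridge CNI configuration; the log does not echo the file itself. A representative bridge conflist of the same shape (illustrative only, not the exact bytes from this run; the subnet matches the pod CIDR chosen earlier):

    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
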
	I0916 10:05:19.119935    1530 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:05:19.119994    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:19.120022    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-138000 minikube.k8s.io/updated_at=2024_09_16T10_05_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=addons-138000 minikube.k8s.io/primary=true
	I0916 10:05:19.191816    1530 ops.go:34] apiserver oom_adj: -16
	I0916 10:05:19.191862    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:19.693009    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:20.192986    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:20.692900    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:21.192882    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:21.692935    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:22.192965    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:22.692949    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:23.192914    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:23.692196    1530 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:05:23.743229    1530 kubeadm.go:1113] duration metric: took 4.623346125s to wait for elevateKubeSystemPrivileges
	I0916 10:05:23.743245    1530 kubeadm.go:394] duration metric: took 11.973186375s to StartCluster
	I0916 10:05:23.743256    1530 settings.go:142] acquiring lock: {Name:mkcc144e0c413dd8611ee3ccbc8c08f02650f2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:23.743424    1530 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:05:23.743599    1530 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/kubeconfig: {Name:mk3766c19461825f7de68cf1dc4ddceadf57e288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:05:23.743833    1530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:05:23.743854    1530 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:05:23.743890    1530 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:05:23.743941    1530 addons.go:69] Setting yakd=true in profile "addons-138000"
	I0916 10:05:23.743949    1530 addons.go:234] Setting addon yakd=true in "addons-138000"
	I0916 10:05:23.743955    1530 addons.go:69] Setting metrics-server=true in profile "addons-138000"
	I0916 10:05:23.743952    1530 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-138000"
	I0916 10:05:23.743966    1530 addons.go:234] Setting addon metrics-server=true in "addons-138000"
	I0916 10:05:23.743972    1530 addons.go:69] Setting storage-provisioner=true in profile "addons-138000"
	I0916 10:05:23.743976    1530 addons.go:234] Setting addon storage-provisioner=true in "addons-138000"
	I0916 10:05:23.743975    1530 config.go:182] Loaded profile config "addons-138000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:05:23.743981    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.743983    1530 addons.go:69] Setting default-storageclass=true in profile "addons-138000"
	I0916 10:05:23.743988    1530 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-138000"
	I0916 10:05:23.744004    1530 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-138000"
	I0916 10:05:23.744010    1530 addons.go:69] Setting ingress=true in profile "addons-138000"
	I0916 10:05:23.744017    1530 addons.go:234] Setting addon ingress=true in "addons-138000"
	I0916 10:05:23.744032    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744038    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744028    1530 addons.go:69] Setting gcp-auth=true in profile "addons-138000"
	I0916 10:05:23.744056    1530 mustload.go:65] Loading cluster: addons-138000
	I0916 10:05:23.744133    1530 config.go:182] Loaded profile config "addons-138000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:05:23.744129    1530 addons.go:69] Setting volcano=true in profile "addons-138000"
	I0916 10:05:23.744147    1530 addons.go:234] Setting addon volcano=true in "addons-138000"
	I0916 10:05:23.744168    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744202    1530 addons.go:69] Setting ingress-dns=true in profile "addons-138000"
	I0916 10:05:23.744207    1530 addons.go:234] Setting addon ingress-dns=true in "addons-138000"
	I0916 10:05:23.744215    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744288    1530 retry.go:31] will retry after 1.188101529s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.744297    1530 addons.go:69] Setting volumesnapshots=true in profile "addons-138000"
	I0916 10:05:23.744301    1530 addons.go:234] Setting addon volumesnapshots=true in "addons-138000"
	I0916 10:05:23.744307    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744343    1530 retry.go:31] will retry after 801.241517ms: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.743981    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744410    1530 retry.go:31] will retry after 1.441407055s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.743962    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744452    1530 addons.go:69] Setting inspektor-gadget=true in profile "addons-138000"
	I0916 10:05:23.744456    1530 addons.go:234] Setting addon inspektor-gadget=true in "addons-138000"
	I0916 10:05:23.744463    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744530    1530 retry.go:31] will retry after 1.483020366s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.744608    1530 retry.go:31] will retry after 997.289426ms: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.743969    1530 addons.go:69] Setting registry=true in profile "addons-138000"
	I0916 10:05:23.744616    1530 addons.go:234] Setting addon registry=true in "addons-138000"
	I0916 10:05:23.744623    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744717    1530 retry.go:31] will retry after 1.412558504s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.743966    1530 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-138000"
	I0916 10:05:23.744730    1530 retry.go:31] will retry after 1.206177731s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.744732    1530 retry.go:31] will retry after 1.441665291s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.744742    1530 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-138000"
	I0916 10:05:23.744008    1530 addons.go:69] Setting cloud-spanner=true in profile "addons-138000"
	I0916 10:05:23.744791    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744801    1530 addons.go:234] Setting addon cloud-spanner=true in "addons-138000"
	I0916 10:05:23.744823    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.744739    1530 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-138000"
	I0916 10:05:23.744834    1530 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-138000"
	I0916 10:05:23.744839    1530 retry.go:31] will retry after 808.19398ms: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.744753    1530 retry.go:31] will retry after 1.137756678s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.744964    1530 retry.go:31] will retry after 615.768656ms: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.745116    1530 retry.go:31] will retry after 1.262123868s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:23.745124    1530 retry.go:31] will retry after 1.398122176s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
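
The burst of "will retry after ...: connect: dial unix .../monitor: connection refused" lines above is expected rather than fatal: each addon goroutine wants the QEMU monitor socket to check machine state, the socket apparently admits only one client at a time, and minikube backs off with randomized delays instead of failing. The same wait-for-socket pattern as an illustrative shell sketch (assumes a netcat with UNIX-socket support, i.e. nc -U):

    SOCK=/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor
    until nc -U "$SOCK" </dev/null 2>/dev/null; do
      sleep 0.5   # minikube uses randomized backoff; a fixed sleep keeps the sketch simple
    done
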
	I0916 10:05:23.746516    1530 addons.go:234] Setting addon default-storageclass=true in "addons-138000"
	I0916 10:05:23.748611    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:23.748364    1530 out.go:177] * Verifying Kubernetes components...
	I0916 10:05:23.749163    1530 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:05:23.752725    1530 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:05:23.752742    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:23.755298    1530 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:05:23.759318    1530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:05:23.767342    1530 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:05:23.767349    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:05:23.767357    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:23.816154    1530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
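
The pipeline above edits CoreDNS's Corefile in flight: kubectl get configmap coredns -o yaml, two sed insertions (a hosts block immediately before the "forward . /etc/resolv.conf" line, and a log directive before "errors"), then kubectl replace -f -. The Corefile fragment it produces looks like this (surrounding directives elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
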
	I0916 10:05:23.874402    1530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:05:23.879806    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:05:23.978341    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:05:24.034681    1530 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0916 10:05:24.034914    1530 node_ready.go:35] waiting up to 6m0s for node "addons-138000" to be "Ready" ...
	I0916 10:05:24.042393    1530 node_ready.go:49] node "addons-138000" has status "Ready":"True"
	I0916 10:05:24.042412    1530 node_ready.go:38] duration metric: took 7.481042ms for node "addons-138000" to be "Ready" ...
	I0916 10:05:24.042416    1530 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:05:24.053539    1530 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace to be "Ready" ...
	I0916 10:05:24.362720    1530 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-138000"
	I0916 10:05:24.362744    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:24.405439    1530 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:05:24.414973    1530 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:05:24.418016    1530 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:05:24.418026    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:05:24.418037    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:24.452046    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:05:24.539171    1530 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-138000" context rescaled to 1 replicas
	I0916 10:05:24.550011    1530 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:05:24.554063    1530 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:05:24.554072    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:05:24.554082    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:24.559036    1530 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:05:24.561966    1530 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:05:24.566065    1530 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:05:24.566076    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:05:24.566087    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:24.611962    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:05:24.619813    1530 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:05:24.619824    1530 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:05:24.628728    1530 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:05:24.628737    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:05:24.638160    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:05:24.747001    1530 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:05:24.751028    1530 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:05:24.751039    1530 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:05:24.751049    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:24.815055    1530 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:05:24.815065    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:05:24.864251    1530 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:05:24.864267    1530 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:05:24.877907    1530 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:05:24.877919    1530 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:05:24.886848    1530 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:05:24.895827    1530 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:05:24.905823    1530 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:05:24.906827    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:05:24.909225    1530 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:05:24.909233    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:05:24.909241    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:24.936897    1530 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:05:24.946856    1530 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:05:24.953818    1530 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:05:24.959836    1530 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:05:24.963731    1530 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:05:24.963741    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:05:24.963752    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:24.967786    1530 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:05:24.967796    1530 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:05:24.967805    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:24.987142    1530 addons.go:475] Verifying addon registry=true in "addons-138000"
	I0916 10:05:24.992795    1530 out.go:177] * Verifying registry addon...
	I0916 10:05:25.001080    1530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:05:25.004700    1530 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:05:25.004708    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
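
The repeated kapi.go:96 lines that follow are a poll over a label selector: list the matching pods and wait until none is still Pending ("Pending: [<nil>]" is the Pending phase with a nil reason). A client-go sketch of that loop, under the same assumptions as the earlier sketches:

    // label_wait_sketch.go — hypothetical stand-in for kapi.go's label-selector wait.
    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodsRunning blocks until every pod matching selector in ns is Running.
    func waitPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // none found yet, or transient error: keep waiting
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil // e.g. still Pending, as in the log
    				}
    			}
    			return true, nil
    		})
    }
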
	I0916 10:05:25.009127    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:05:25.011782    1530 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:05:25.015893    1530 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:05:25.015904    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:05:25.015914    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:25.076176    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:05:25.082596    1530 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:05:25.082613    1530 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:05:25.112474    1530 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:05:25.112487    1530 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:05:25.136527    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:05:25.147811    1530 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:05:25.151897    1530 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:05:25.151908    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:05:25.151919    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:25.161666    1530 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:05:25.164832    1530 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:05:25.164842    1530 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:05:25.164852    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:25.186776    1530 retry.go:31] will retry after 2.225214588s: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/monitor: connect: connection refused
	I0916 10:05:25.201539    1530 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:05:25.201551    1530 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:05:25.230070    1530 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:05:25.233848    1530 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:05:25.237333    1530 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:05:25.237341    1530 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:05:25.241870    1530 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:05:25.241878    1530 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:05:25.241888    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:25.245783    1530 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:05:25.253685    1530 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:05:25.262789    1530 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:05:25.269807    1530 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:05:25.278822    1530 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:05:25.285799    1530 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:05:25.294818    1530 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:05:25.300840    1530 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:05:25.300857    1530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:05:25.300882    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:25.346459    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:05:25.359399    1530 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:05:25.359416    1530 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:05:25.424193    1530 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:05:25.424207    1530 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:05:25.453796    1530 addons.go:475] Verifying addon metrics-server=true in "addons-138000"
	I0916 10:05:25.462086    1530 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:05:25.462099    1530 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:05:25.504116    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:25.514234    1530 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:05:25.514246    1530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:05:25.543388    1530 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:05:25.543399    1530 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:05:25.569364    1530 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:05:25.569377    1530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:05:25.597547    1530 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:05:25.597556    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:05:25.640095    1530 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:05:25.640109    1530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:05:25.645205    1530 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:05:25.645219    1530 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:05:25.658522    1530 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:05:25.658534    1530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:05:25.749501    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:05:25.752174    1530 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:05:25.752185    1530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:05:25.766487    1530 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:05:25.766501    1530 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:05:25.780179    1530 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:05:25.780189    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:05:25.800963    1530 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:05:25.800976    1530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:05:25.818682    1530 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:05:25.818693    1530 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:05:25.838731    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:05:25.858105    1530 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:05:25.858117    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:05:25.880591    1530 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:05:25.880607    1530 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:05:25.924860    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:05:26.004646    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:26.018611    1530 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:05:26.018621    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:05:26.057648    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:26.088714    1530 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:05:26.088729    1530 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:05:26.193388    1530 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:05:26.193398    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:05:26.411135    1530 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:05:26.411148    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:05:26.523549    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:26.564942    1530 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:05:26.564954    1530 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:05:26.700302    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:05:27.004000    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:27.413492    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:27.696992    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:28.041603    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:28.140267    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:28.516384    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:28.843516    1530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.834438833s)
	I0916 10:05:28.843561    1530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.767435375s)
	I0916 10:05:28.843570    1530 addons.go:475] Verifying addon ingress=true in "addons-138000"
	I0916 10:05:28.843627    1530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.707153542s)
	I0916 10:05:28.843641    1530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.497234291s)
	I0916 10:05:28.843677    1530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.094217042s)
	I0916 10:05:28.843709    1530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.005010834s)
	I0916 10:05:28.843771    1530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.918949667s)
	W0916 10:05:28.844223    1530 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:05:28.844236    1530 retry.go:31] will retry after 220.712043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
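
The failure above is a CRD establishment race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch that creates its CRD, and API discovery has not yet registered the new kind, hence "ensure CRDs are installed first". retry.go backs off 220ms, and at 10:05:29 below the apply is reissued with --force. A generic sketch of that retry shape (attempt count and backoff schedule are assumptions, not minikube's schedule):

    // apply_retry_sketch.go — hypothetical; retries kubectl apply when a CR references
    // a CRD created in the same batch ("ensure CRDs are installed first").
    package sketch

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply -f file...` with exponential backoff.
    func applyWithRetry(kubectl string, attempts int, delay time.Duration, files ...string) error {
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	var err error
    	for i := 0; i < attempts; i++ {
    		out, e := exec.Command(kubectl, args...).CombinedOutput()
    		if e == nil {
    			return nil
    		}
    		err = fmt.Errorf("apply failed: %v\n%s", e, out)
    		time.Sleep(delay)
    		delay *= 2 // simple exponential backoff
    	}
    	return err
    }

An alternative to retrying is to apply the CRD manifests in a first pass and gate the second pass on kubectl wait --for=condition=Established on each CRD.
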
	I0916 10:05:28.845413    1530 out.go:177] * Verifying ingress addon...
	I0916 10:05:28.854204    1530 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-138000 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:05:28.857889    1530 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 10:05:28.879148    1530 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:05:28.879156    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:29.005377    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:29.066124    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:05:29.350606    1530 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.650328042s)
	I0916 10:05:29.350627    1530 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-138000"
	I0916 10:05:29.354374    1530 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:05:29.364840    1530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:05:29.368754    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:29.368906    1530 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:05:29.368912    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:29.503804    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:29.861031    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:29.867645    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:30.003654    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:30.361195    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:30.367086    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:30.504114    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:30.556209    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:30.861218    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:30.867808    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:31.003895    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:31.361228    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:31.367253    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:31.503947    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:31.884025    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:31.884072    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:32.003963    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:32.361708    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:32.367317    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:32.503810    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:32.557182    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:32.861518    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:32.869260    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:33.004196    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:33.361257    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:33.367122    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:33.503761    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:33.861905    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:33.867954    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:34.004579    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:34.364098    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:34.366902    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:34.503917    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:34.557214    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:34.860980    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:34.867384    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:35.004289    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:35.361922    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:35.369928    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:35.503955    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:35.861062    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:35.867170    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:36.004630    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:36.360687    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:36.367281    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:36.503884    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:36.557250    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:36.861220    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:36.866638    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:37.003676    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:37.360782    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:37.366970    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:37.503681    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:37.860804    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:37.867029    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:38.003712    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:38.018640    1530 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:05:38.018657    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:38.048904    1530 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:05:38.057903    1530 addons.go:234] Setting addon gcp-auth=true in "addons-138000"
	I0916 10:05:38.057925    1530 host.go:66] Checking if "addons-138000" exists ...
	I0916 10:05:38.058654    1530 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:05:38.058663    1530 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/addons-138000/id_rsa Username:docker}
	I0916 10:05:38.092636    1530 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:05:38.096640    1530 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:05:38.100676    1530 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:05:38.100682    1530 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:05:38.107139    1530 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:05:38.107145    1530 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:05:38.113487    1530 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:05:38.113495    1530 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:05:38.123702    1530 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:05:38.361013    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:38.426363    1530 addons.go:475] Verifying addon gcp-auth=true in "addons-138000"
	I0916 10:05:38.430023    1530 out.go:177] * Verifying gcp-auth addon...
	I0916 10:05:38.433222    1530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:05:38.460849    1530 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:05:38.461326    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:38.561059    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:38.988422    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:39.057121    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:39.089241    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:39.089636    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:39.360851    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:39.367233    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:39.572526    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:39.862263    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:39.868213    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:40.003772    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:40.360830    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:40.367239    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:40.503523    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:05:40.860923    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:40.866992    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:41.003408    1530 kapi.go:107] duration metric: took 16.002606917s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:05:41.360883    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:41.367062    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:41.557075    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:41.869580    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:41.870488    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:42.360781    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:42.367122    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:42.860939    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:42.866923    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:43.361571    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:43.367418    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:43.557060    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:43.860873    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:43.866898    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:44.369071    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:44.369196    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:44.861025    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:44.866931    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:45.360329    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:45.367992    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:45.559378    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:45.862775    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:45.869440    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:46.361894    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:46.367681    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:46.861549    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:46.868059    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:47.361481    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:47.367093    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:47.860579    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:47.867458    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:48.057947    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:48.361529    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:48.367291    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:48.861825    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:48.867072    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:49.362041    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:49.367138    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:49.863127    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:49.868062    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:50.123610    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:50.361842    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:50.366951    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:50.861685    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:50.866991    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:51.362308    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:51.366975    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:51.861508    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:51.867174    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:52.361593    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:52.367317    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:52.556560    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:52.861456    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:52.867209    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:53.361631    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:53.367122    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:53.861696    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:53.867409    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:54.361917    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:54.366776    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:54.557966    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:54.861561    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:54.867084    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:55.362353    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:55.371678    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:55.861493    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:55.867020    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:56.361812    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:56.367051    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:56.862677    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:56.867440    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:57.059909    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:57.362666    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:57.367254    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:57.860208    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:57.867396    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:58.361653    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:58.367198    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:58.861702    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:58.866877    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:59.361662    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:59.366878    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:05:59.557863    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:05:59.941297    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:05:59.941935    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:00.362095    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:00.366942    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:00.861773    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:00.867113    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:01.361274    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:01.366841    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:01.861367    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:01.866841    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:02.057582    1530 pod_ready.go:103] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"False"
	I0916 10:06:02.361575    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:02.366859    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:02.861227    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:02.962736    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:03.361617    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:03.367289    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:03.861735    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:03.867077    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:04.057818    1530 pod_ready.go:93] pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace has status "Ready":"True"
	I0916 10:06:04.057829    1530 pod_ready.go:82] duration metric: took 40.004975417s for pod "coredns-7c65d6cfc9-hvllr" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.057834    1530 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-llt7q" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.059077    1530 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-llt7q" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-llt7q" not found
	I0916 10:06:04.059084    1530 pod_ready.go:82] duration metric: took 1.246292ms for pod "coredns-7c65d6cfc9-llt7q" in "kube-system" namespace to be "Ready" ...
	E0916 10:06:04.059088    1530 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-llt7q" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-llt7q" not found
	I0916 10:06:04.059091    1530 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-138000" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.061218    1530 pod_ready.go:93] pod "etcd-addons-138000" in "kube-system" namespace has status "Ready":"True"
	I0916 10:06:04.061224    1530 pod_ready.go:82] duration metric: took 2.1305ms for pod "etcd-addons-138000" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.061228    1530 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-138000" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.063683    1530 pod_ready.go:93] pod "kube-apiserver-addons-138000" in "kube-system" namespace has status "Ready":"True"
	I0916 10:06:04.063689    1530 pod_ready.go:82] duration metric: took 2.458042ms for pod "kube-apiserver-addons-138000" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.063693    1530 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-138000" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.067025    1530 pod_ready.go:93] pod "kube-controller-manager-addons-138000" in "kube-system" namespace has status "Ready":"True"
	I0916 10:06:04.067030    1530 pod_ready.go:82] duration metric: took 3.333334ms for pod "kube-controller-manager-addons-138000" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.067034    1530 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zz4wb" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.259380    1530 pod_ready.go:93] pod "kube-proxy-zz4wb" in "kube-system" namespace has status "Ready":"True"
	I0916 10:06:04.259390    1530 pod_ready.go:82] duration metric: took 192.354666ms for pod "kube-proxy-zz4wb" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.259394    1530 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-138000" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.362168    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:04.366749    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:04.658944    1530 pod_ready.go:93] pod "kube-scheduler-addons-138000" in "kube-system" namespace has status "Ready":"True"
	I0916 10:06:04.658953    1530 pod_ready.go:82] duration metric: took 399.562375ms for pod "kube-scheduler-addons-138000" in "kube-system" namespace to be "Ready" ...
	I0916 10:06:04.658956    1530 pod_ready.go:39] duration metric: took 40.617244708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:06:04.658966    1530 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:06:04.659037    1530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:06:04.672103    1530 api_server.go:72] duration metric: took 40.928951125s to wait for apiserver process to appear ...
	I0916 10:06:04.672114    1530 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:06:04.672124    1530 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0916 10:06:04.674714    1530 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0916 10:06:04.675248    1530 api_server.go:141] control plane version: v1.31.1
	I0916 10:06:04.675255    1530 api_server.go:131] duration metric: took 3.138ms to wait for apiserver health ...
	I0916 10:06:04.675258    1530 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:06:04.861441    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:04.862251    1530 system_pods.go:59] 17 kube-system pods found
	I0916 10:06:04.862262    1530 system_pods.go:61] "coredns-7c65d6cfc9-hvllr" [22e4ca2f-ef1f-410c-90d2-42cf2693f8c1] Running
	I0916 10:06:04.862266    1530 system_pods.go:61] "csi-hostpath-attacher-0" [e4edc0f8-f60f-43b3-a93a-b860040898e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:06:04.862270    1530 system_pods.go:61] "csi-hostpath-resizer-0" [42287ae3-cbaf-451d-810c-43b8e3b840d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:06:04.862274    1530 system_pods.go:61] "csi-hostpathplugin-5jq7w" [c36ac78d-7884-4100-9453-c9ff15bc73e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:06:04.862277    1530 system_pods.go:61] "etcd-addons-138000" [a9ce5401-641b-4865-957e-4f051d483714] Running
	I0916 10:06:04.862280    1530 system_pods.go:61] "kube-apiserver-addons-138000" [7a2996dc-6c36-4759-a837-442f17636e26] Running
	I0916 10:06:04.862282    1530 system_pods.go:61] "kube-controller-manager-addons-138000" [e71e26a6-bc6e-4b8a-bcd8-3afe1f46e334] Running
	I0916 10:06:04.862284    1530 system_pods.go:61] "kube-ingress-dns-minikube" [4dfbbe11-f21e-40f4-868f-efa62b7ec9cb] Running
	I0916 10:06:04.862286    1530 system_pods.go:61] "kube-proxy-zz4wb" [a6974409-e4c4-47f5-92c0-cf8616a8ce2f] Running
	I0916 10:06:04.862288    1530 system_pods.go:61] "kube-scheduler-addons-138000" [1353fb32-673b-40cf-865c-c5325567edf9] Running
	I0916 10:06:04.862291    1530 system_pods.go:61] "metrics-server-84c5f94fbc-gscsm" [a83f5052-5c41-46c4-9356-5fb3b5b8759d] Running
	I0916 10:06:04.862310    1530 system_pods.go:61] "nvidia-device-plugin-daemonset-rzk7g" [95a607d1-6649-4565-b268-b0ee84e53c1b] Running
	I0916 10:06:04.862313    1530 system_pods.go:61] "registry-66c9cd494c-z5m97" [aaeea89d-dbf1-40ff-8089-7095e3cd9e2a] Running
	I0916 10:06:04.862314    1530 system_pods.go:61] "registry-proxy-lmvtl" [00468330-2389-470c-9e96-e57cad540e47] Running
	I0916 10:06:04.862316    1530 system_pods.go:61] "snapshot-controller-56fcc65765-h8cq5" [e72dd95d-2370-44d2-b57a-4167f65e8f3b] Running
	I0916 10:06:04.862318    1530 system_pods.go:61] "snapshot-controller-56fcc65765-xnw4k" [0ccb4f5f-6166-4d19-8fcb-dd64bb3f93c1] Running
	I0916 10:06:04.862321    1530 system_pods.go:61] "storage-provisioner" [4ac5f659-22d6-498d-a51f-983ce1bc22c0] Running
	I0916 10:06:04.862324    1530 system_pods.go:74] duration metric: took 187.066708ms to wait for pod list to return data ...
	I0916 10:06:04.862328    1530 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:06:04.866947    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:05.058412    1530 default_sa.go:45] found service account: "default"
	I0916 10:06:05.058423    1530 default_sa.go:55] duration metric: took 196.094042ms for default service account to be created ...
	I0916 10:06:05.058426    1530 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:06:05.261558    1530 system_pods.go:86] 17 kube-system pods found
	I0916 10:06:05.261570    1530 system_pods.go:89] "coredns-7c65d6cfc9-hvllr" [22e4ca2f-ef1f-410c-90d2-42cf2693f8c1] Running
	I0916 10:06:05.261575    1530 system_pods.go:89] "csi-hostpath-attacher-0" [e4edc0f8-f60f-43b3-a93a-b860040898e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:06:05.261577    1530 system_pods.go:89] "csi-hostpath-resizer-0" [42287ae3-cbaf-451d-810c-43b8e3b840d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:06:05.261581    1530 system_pods.go:89] "csi-hostpathplugin-5jq7w" [c36ac78d-7884-4100-9453-c9ff15bc73e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:06:05.261583    1530 system_pods.go:89] "etcd-addons-138000" [a9ce5401-641b-4865-957e-4f051d483714] Running
	I0916 10:06:05.261585    1530 system_pods.go:89] "kube-apiserver-addons-138000" [7a2996dc-6c36-4759-a837-442f17636e26] Running
	I0916 10:06:05.261587    1530 system_pods.go:89] "kube-controller-manager-addons-138000" [e71e26a6-bc6e-4b8a-bcd8-3afe1f46e334] Running
	I0916 10:06:05.261590    1530 system_pods.go:89] "kube-ingress-dns-minikube" [4dfbbe11-f21e-40f4-868f-efa62b7ec9cb] Running
	I0916 10:06:05.261591    1530 system_pods.go:89] "kube-proxy-zz4wb" [a6974409-e4c4-47f5-92c0-cf8616a8ce2f] Running
	I0916 10:06:05.261593    1530 system_pods.go:89] "kube-scheduler-addons-138000" [1353fb32-673b-40cf-865c-c5325567edf9] Running
	I0916 10:06:05.261595    1530 system_pods.go:89] "metrics-server-84c5f94fbc-gscsm" [a83f5052-5c41-46c4-9356-5fb3b5b8759d] Running
	I0916 10:06:05.261597    1530 system_pods.go:89] "nvidia-device-plugin-daemonset-rzk7g" [95a607d1-6649-4565-b268-b0ee84e53c1b] Running
	I0916 10:06:05.261600    1530 system_pods.go:89] "registry-66c9cd494c-z5m97" [aaeea89d-dbf1-40ff-8089-7095e3cd9e2a] Running
	I0916 10:06:05.261601    1530 system_pods.go:89] "registry-proxy-lmvtl" [00468330-2389-470c-9e96-e57cad540e47] Running
	I0916 10:06:05.261603    1530 system_pods.go:89] "snapshot-controller-56fcc65765-h8cq5" [e72dd95d-2370-44d2-b57a-4167f65e8f3b] Running
	I0916 10:06:05.261604    1530 system_pods.go:89] "snapshot-controller-56fcc65765-xnw4k" [0ccb4f5f-6166-4d19-8fcb-dd64bb3f93c1] Running
	I0916 10:06:05.261606    1530 system_pods.go:89] "storage-provisioner" [4ac5f659-22d6-498d-a51f-983ce1bc22c0] Running
	I0916 10:06:05.261610    1530 system_pods.go:126] duration metric: took 203.184333ms to wait for k8s-apps to be running ...
	I0916 10:06:05.261614    1530 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:06:05.261691    1530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:06:05.267583    1530 system_svc.go:56] duration metric: took 5.967458ms WaitForService to wait for kubelet
	I0916 10:06:05.267591    1530 kubeadm.go:582] duration metric: took 41.524451583s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:06:05.267599    1530 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:06:05.361445    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:05.366613    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:05.458144    1530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 10:06:05.458153    1530 node_conditions.go:123] node cpu capacity is 2
	I0916 10:06:05.458159    1530 node_conditions.go:105] duration metric: took 190.559041ms to run NodePressure ...
	I0916 10:06:05.458166    1530 start.go:241] waiting for startup goroutines ...
	I0916 10:06:05.861704    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:05.866732    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:06.361385    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:06.366846    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:06.862178    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:06.867081    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:07.361582    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:07.366679    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:07.861858    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:07.867250    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:08.361915    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:08.367008    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:08.867634    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:08.898613    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:09.362125    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:09.367191    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:09.861978    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:09.868902    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:10.361556    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:10.367560    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:10.861450    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:10.867054    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:11.360059    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:11.367067    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:11.861705    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:11.867664    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:12.361627    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:12.366953    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:12.900476    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:12.900921    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:13.361394    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:13.366562    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:13.864064    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:13.870497    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:14.361333    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:14.366808    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:14.861579    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:14.866573    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:15.361734    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:15.366327    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:15.861308    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:15.866770    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:16.361754    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:16.366827    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:16.861174    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:16.866810    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:17.361077    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:17.366705    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:17.861981    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:17.866815    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:18.361466    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:18.366997    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:18.861426    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:18.866710    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:19.361097    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:19.366819    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:19.861515    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:19.866385    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:20.363362    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:20.365874    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:20.861764    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:20.866604    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:21.361321    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:21.366413    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:21.861484    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:21.866376    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:22.361258    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:22.367574    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:22.861161    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:22.866553    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:23.361274    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:23.366436    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:23.861694    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:23.866827    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:24.361547    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:24.366541    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:24.862701    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:24.865863    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:25.360923    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:25.366460    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:25.861217    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:25.866542    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:26.361057    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:26.366596    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:26.862052    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:26.868617    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:27.360797    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:27.366100    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:27.860940    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:27.866283    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:28.361692    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:28.366762    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:28.862166    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:28.866863    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:06:29.365035    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:29.368587    1530 kapi.go:107] duration metric: took 1m0.004791208s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:06:29.868705    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:30.363845    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:30.869762    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:31.369433    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:31.862222    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:32.359419    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:32.861807    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:33.361422    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:33.860927    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:34.361434    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:34.860950    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:35.361228    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:35.861180    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:36.361475    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:36.860876    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:37.623089    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:37.861323    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:38.361201    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:38.861144    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:39.361044    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:39.864726    1530 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:06:40.361095    1530 kapi.go:107] duration metric: took 1m11.50445575s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:07:00.435769    1530 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:07:00.435779    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:07:00.935694    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:07:01.436055    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:07:01.935777    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:07:02.435598    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:07:02.935721    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:07:03.435841    1530 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:07:03.936209    1530 kapi.go:107] duration metric: took 1m25.504475167s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:07:03.939921    1530 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-138000 cluster.
	I0916 10:07:03.943750    1530 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:07:03.948896    1530 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:07:03.951944    1530 out.go:177] * Enabled addons: default-storageclass, ingress-dns, storage-provisioner-rancher, storage-provisioner, metrics-server, volcano, cloud-spanner, nvidia-device-plugin, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0916 10:07:03.956596    1530 addons.go:510] duration metric: took 1m40.214474292s for enable addons: enabled=[default-storageclass ingress-dns storage-provisioner-rancher storage-provisioner metrics-server volcano cloud-spanner nvidia-device-plugin inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0916 10:07:03.956629    1530 start.go:246] waiting for cluster config update ...
	I0916 10:07:03.956661    1530 start.go:255] writing updated cluster config ...
	I0916 10:07:03.962798    1530 ssh_runner.go:195] Run: rm -f paused
	I0916 10:07:04.123821    1530 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0916 10:07:04.127869    1530 out.go:201] 
	W0916 10:07:04.131889    1530 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0916 10:07:04.135863    1530 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0916 10:07:04.143058    1530 out.go:177] * Done! kubectl is now configured to use "addons-138000" cluster and "default" namespace by default
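
Note: the gcp-auth opt-out mentioned in the output above is a plain pod label. A minimal sketch of a pod the webhook would skip (pod name and image are illustrative; the label key and value follow the addon's documented convention):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # illustrative name
      labels:
        gcp-auth-skip-secret: "true"     # tells the gcp-auth webhook not to mount credentials
    spec:
      containers:
      - name: app
        image: busybox                   # illustrative image
        command: ["sleep", "3600"]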
	
	
	==> Docker <==
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.433623810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.433794390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.433872681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:16:54 addons-138000 dockerd[1266]: time="2024-09-16T17:16:54.903403393Z" level=info msg="ignoring event" container=cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.903450100Z" level=info msg="shim disconnected" id=cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b namespace=moby
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.903479933Z" level=warning msg="cleaning up after shim disconnected" id=cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b namespace=moby
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.903484141Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.916091841Z" level=warning msg="cleanup warnings time=\"2024-09-16T17:16:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 16 17:16:54 addons-138000 dockerd[1266]: time="2024-09-16T17:16:54.941396614Z" level=info msg="ignoring event" container=091238f1ba472263f1b8f2be68488d8fa7bc8b76a2d88d0ef0649c06c1369209 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.942104143Z" level=info msg="shim disconnected" id=091238f1ba472263f1b8f2be68488d8fa7bc8b76a2d88d0ef0649c06c1369209 namespace=moby
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.942172601Z" level=warning msg="cleaning up after shim disconnected" id=091238f1ba472263f1b8f2be68488d8fa7bc8b76a2d88d0ef0649c06c1369209 namespace=moby
	Sep 16 17:16:54 addons-138000 dockerd[1273]: time="2024-09-16T17:16:54.942190434Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1266]: time="2024-09-16T17:16:55.013434253Z" level=info msg="ignoring event" container=dc90959deae500e4ecf5fa52e3214128aec64f6d1c720ae583b03b45aa012d6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.013751206Z" level=info msg="shim disconnected" id=dc90959deae500e4ecf5fa52e3214128aec64f6d1c720ae583b03b45aa012d6c namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.014636441Z" level=warning msg="cleaning up after shim disconnected" id=dc90959deae500e4ecf5fa52e3214128aec64f6d1c720ae583b03b45aa012d6c namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.014661816Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.020990460Z" level=warning msg="cleanup warnings time=\"2024-09-16T17:16:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.092250098Z" level=info msg="shim disconnected" id=517707d91134d92f08ddfd1a21e6dcb8b31898633ecf33c2179003e50174215e namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.092404429Z" level=warning msg="cleaning up after shim disconnected" id=517707d91134d92f08ddfd1a21e6dcb8b31898633ecf33c2179003e50174215e namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.092423345Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1266]: time="2024-09-16T17:16:55.093674241Z" level=info msg="ignoring event" container=517707d91134d92f08ddfd1a21e6dcb8b31898633ecf33c2179003e50174215e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:16:55 addons-138000 dockerd[1266]: time="2024-09-16T17:16:55.224342008Z" level=info msg="ignoring event" container=cd486017698a2be124b3ddcd37dd06fcc66c8bf7f877552d896ede5bd4826083 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.224673710Z" level=info msg="shim disconnected" id=cd486017698a2be124b3ddcd37dd06fcc66c8bf7f877552d896ede5bd4826083 namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.224780250Z" level=warning msg="cleaning up after shim disconnected" id=cd486017698a2be124b3ddcd37dd06fcc66c8bf7f877552d896ede5bd4826083 namespace=moby
	Sep 16 17:16:55 addons-138000 dockerd[1273]: time="2024-09-16T17:16:55.224799167Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	d8b7b16847ecf       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  1 second ago        Running             hello-world-app            0                   29e1f49f09ec0       hello-world-app-55bf9c44b4-dwkrn
	36541a28ba35b       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                9 seconds ago       Running             nginx                      0                   574a29f907864       nginx
	8d07ce17ec318       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   742fcb2e25533       gcp-auth-89d5ffd79-n2pft
	966f9164678a2       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             10 minutes ago      Running             controller                 0                   ce93112f29da5       ingress-nginx-controller-bc57996ff-rtgw6
	ad5b38ea7d135       420193b27261a                                                                                                                10 minutes ago      Exited              patch                      1                   c6089d222bf61       ingress-nginx-admission-patch-7r69n
	f040de8c955b7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              create                     0                   14f4e7a4a55c6       ingress-nginx-admission-create-r6t2t
	2b5da7ab5a1e1       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        10 minutes ago      Running             yakd                       0                   caa3a91f286d7       yakd-dashboard-67d98fc6b-njjrz
	36d0436a66b4b       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     11 minutes ago      Running             nvidia-device-plugin-ctr   0                   b2044fb257c06       nvidia-device-plugin-daemonset-rzk7g
	95616ed96ddd1       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               11 minutes ago      Running             cloud-spanner-emulator     0                   167290cb3ced6       cloud-spanner-emulator-769b77f747-zq77f
	091238f1ba472       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Exited              registry-proxy             0                   517707d91134d       registry-proxy-lmvtl
	5c527c08ab499       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       11 minutes ago      Running             local-path-provisioner     0                   e176ff435a005       local-path-provisioner-86d989889c-9q9v7
	be88ddf2f17b5       ba04bb24b9575                                                                                                                11 minutes ago      Running             storage-provisioner        0                   36e6322bcbdbc       storage-provisioner
	69d96c283eaaf       2f6c962e7b831                                                                                                                11 minutes ago      Running             coredns                    0                   00a6805b9510e       coredns-7c65d6cfc9-hvllr
	45b3640454d79       24a140c548c07                                                                                                                11 minutes ago      Running             kube-proxy                 0                   bd3cef8c7212a       kube-proxy-zz4wb
	9e05d747b9d57       27e3830e14027                                                                                                                11 minutes ago      Running             etcd                       0                   44196420cd30f       etcd-addons-138000
	29d58d25aa02f       d3f53a98c0a9d                                                                                                                11 minutes ago      Running             kube-apiserver             0                   6277db605d1bb       kube-apiserver-addons-138000
	0828348d56f5d       279f381cb3736                                                                                                                11 minutes ago      Running             kube-controller-manager    0                   beec5b7def222       kube-controller-manager-addons-138000
	dac010e790fd3       7f8aa378bb47d                                                                                                                11 minutes ago      Running             kube-scheduler             0                   30337f9530e89       kube-scheduler-addons-138000
	
	
	==> controller_ingress [966f9164678a] <==
	I0916 17:16:43.071273       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"309c6629-d449-47ac-a37f-91f1fced9091", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2664", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0916 17:16:43.088555       7 controller.go:213] "Backend successfully reloaded"
	I0916 17:16:43.088779       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-rtgw6", UID:"5b4f3a14-b3c2-4b6a-b18f-8af60987f559", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0916 17:16:46.405318       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0916 17:16:46.405386       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0916 17:16:46.420933       7 controller.go:213] "Backend successfully reloaded"
	I0916 17:16:46.421520       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-rtgw6", UID:"5b4f3a14-b3c2-4b6a-b18f-8af60987f559", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0916 17:16:52.326768       7 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0916 17:16:52.337930       7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.011s renderingIngressLength:2 renderingIngressTime:0s admissionTime:0.011s testedConfigurationSize:26.2kB}
	I0916 17:16:52.338040       7 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0916 17:16:52.421022       7 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	I0916 17:16:52.421272       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"b7a130c3-820f-4a71-a90f-5047c97ba8bb", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2704", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0916 17:16:53.071305       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0916 17:16:53.107574       7 controller.go:213] "Backend successfully reloaded"
	I0916 17:16:53.107735       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-rtgw6", UID:"5b4f3a14-b3c2-4b6a-b18f-8af60987f559", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0916 17:16:53.307305       7 sigterm.go:36] "Received SIGTERM, shutting down"
	I0916 17:16:53.307355       7 nginx.go:393] "Shutting down controller queues"
	E0916 17:16:53.308141       7 status.go:120] "error obtaining running IP address" err="pods is forbidden: User \"system:serviceaccount:ingress-nginx:ingress-nginx\" cannot list resource \"pods\" in API group \"\" in the namespace \"ingress-nginx\""
	I0916 17:16:53.308150       7 nginx.go:401] "Stopping admission controller"
	E0916 17:16:53.308211       7 nginx.go:340] "Error listening for TLS connections" err="http: Server closed"
	I0916 17:16:53.308243       7 nginx.go:409] "Stopping NGINX process"
	2024/09/16 17:16:53 [notice] 313#313: signal process started
	I0916 17:16:54.315071       7 nginx.go:422] "NGINX process has stopped"
	I0916 17:16:54.315081       7 sigterm.go:44] Handled quit, delaying controller exit for 10 seconds
	10.244.0.1 - - [16/Sep/2024:17:16:52 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" 80 0.001 [default-nginx-80] [] 10.244.0.31:80 615 0.001 200 c502f8c7bb04a3e4e40802316c4b29b6
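
Note: the sync/reload cycle above is an ordinary networking.k8s.io/v1 Ingress being admitted and rendered into the NGINX config. A minimal sketch of the object shape involved (the name kube-system/example-ingress, the nginx ingress class, and the hello-world-app backend are taken from the log; the path and port fields are assumptions):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
      namespace: kube-system
    spec:
      ingressClassName: nginx
      rules:
      - http:
          paths:
          - path: /                      # assumed path
            pathType: Prefix
            backend:
              service:
                name: hello-world-app    # service named in the log
                port:
                  number: 80             # assumed port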
	
	
	==> coredns [69d96c283eaa] <==
	[INFO] 10.244.0.22:52540 - 3074 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000025375s
	[INFO] 10.244.0.22:52802 - 197 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000081207s
	[INFO] 10.244.0.22:52540 - 50781 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000249995s
	[INFO] 10.244.0.22:52802 - 6794 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041291s
	[INFO] 10.244.0.22:52540 - 21458 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015792s
	[INFO] 10.244.0.22:52802 - 1239 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027624s
	[INFO] 10.244.0.22:52540 - 32052 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012917s
	[INFO] 10.244.0.22:52540 - 23142 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026666s
	[INFO] 10.244.0.22:52540 - 26530 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032958s
	[INFO] 10.244.0.22:52802 - 34405 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080582s
	[INFO] 10.244.0.22:52802 - 27714 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000022583s
	[INFO] 10.244.0.22:50956 - 8527 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000039332s
	[INFO] 10.244.0.22:50708 - 56185 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000122498s
	[INFO] 10.244.0.22:50956 - 5816 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00001275s
	[INFO] 10.244.0.22:50708 - 6543 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000012916s
	[INFO] 10.244.0.22:50956 - 63837 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011417s
	[INFO] 10.244.0.22:50708 - 36699 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010124s
	[INFO] 10.244.0.22:50708 - 6004 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011458s
	[INFO] 10.244.0.22:50956 - 9246 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033457s
	[INFO] 10.244.0.22:50708 - 61811 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011667s
	[INFO] 10.244.0.22:50708 - 28996 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013417s
	[INFO] 10.244.0.22:50956 - 23608 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011417s
	[INFO] 10.244.0.22:50956 - 51002 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010791s
	[INFO] 10.244.0.22:50956 - 50921 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000012917s
	[INFO] 10.244.0.22:50708 - 16867 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000011833s
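
Note: the NXDOMAIN-then-NOERROR pattern above is the standard pod DNS search-path walk, not a failure: "hello-world-app.default.svc.cluster.local" contains fewer than five dots, so with the usual ndots:5 resolver option each search suffix is tried (and returns NXDOMAIN) before the absolute name resolves. A typical resolv.conf for a pod in the ingress-nginx namespace would look like this (values are stock cluster defaults, assumed rather than read from this run):

    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10    # kube-dns ClusterIP; assumed default
    options ndots:5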
	
	
	==> describe nodes <==
	Name:               addons-138000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-138000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=addons-138000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_05_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-138000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 17:05:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-138000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 17:16:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 17:16:54 +0000   Mon, 16 Sep 2024 17:05:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 17:16:54 +0000   Mon, 16 Sep 2024 17:05:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 17:16:54 +0000   Mon, 16 Sep 2024 17:05:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 17:16:54 +0000   Mon, 16 Sep 2024 17:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-138000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 08b551f829a74d40a46b1834fe74f1c4
	  System UUID:                08b551f829a74d40a46b1834fe74f1c4
	  Boot ID:                    6f4db5c5-2ce9-4e67-a619-bca667f6472d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-zq77f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-dwkrn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     registry-test                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gcp-auth                    gcp-auth-89d5ffd79-n2pft                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-7c65d6cfc9-hvllr                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-138000                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-138000               250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-138000      200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zz4wb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-138000               100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-rzk7g       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-9q9v7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-njjrz             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-138000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-138000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-138000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node addons-138000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node addons-138000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node addons-138000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m                kubelet          Node addons-138000 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node addons-138000 event: Registered Node addons-138000 in Controller
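
Note: the node summary above is standard `kubectl describe node` output as gathered into this log bundle; it can be reproduced against the same profile with (assuming the profile's kubeconfig context is active):

    kubectl describe node addons-138000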
	
	
	==> dmesg <==
	[  +5.038350] kauditd_printk_skb: 240 callbacks suppressed
	[  +5.452062] kauditd_printk_skb: 93 callbacks suppressed
	[  +5.257671] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.524380] kauditd_printk_skb: 11 callbacks suppressed
	[Sep16 17:06] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.176556] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.582492] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.654815] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.893481] kauditd_printk_skb: 24 callbacks suppressed
	[ +13.589458] kauditd_printk_skb: 29 callbacks suppressed
	[ +12.491709] kauditd_printk_skb: 61 callbacks suppressed
	[Sep16 17:07] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.773910] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.148628] kauditd_printk_skb: 2 callbacks suppressed
	[ +16.315186] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.788148] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 17:08] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 17:11] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 17:15] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 17:16] kauditd_printk_skb: 19 callbacks suppressed
	[ +16.588302] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.504072] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.426994] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.253120] kauditd_printk_skb: 4 callbacks suppressed
	[ +14.697849] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [9e05d747b9d5] <==
	{"level":"info","ts":"2024-09-16T17:05:39.544800Z","caller":"traceutil/trace.go:171","msg":"trace[1679528745] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1004; }","duration":"135.241174ms","start":"2024-09-16T17:05:39.409555Z","end":"2024-09-16T17:05:39.544796Z","steps":["trace[1679528745] 'agreement among raft nodes before linearized reading'  (duration: 135.213423ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:05:50.093294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.333515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:05:50.093393Z","caller":"traceutil/trace.go:171","msg":"trace[2139794119] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"182.441685ms","start":"2024-09-16T17:05:49.910943Z","end":"2024-09-16T17:05:50.093385Z","steps":["trace[2139794119] 'range keys from in-memory index tree'  (duration: 182.299389ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:05:56.171029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.721464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-hvllr\" ","response":"range_response_count:1 size:5093"}
	{"level":"info","ts":"2024-09-16T17:05:56.171102Z","caller":"traceutil/trace.go:171","msg":"trace[1361046657] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-hvllr; range_end:; response_count:1; response_revision:1063; }","duration":"139.826842ms","start":"2024-09-16T17:05:56.031267Z","end":"2024-09-16T17:05:56.171094Z","steps":["trace[1361046657] 'range keys from in-memory index tree'  (duration: 139.645379ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:05:59.915247Z","caller":"traceutil/trace.go:171","msg":"trace[250798638] transaction","detail":"{read_only:false; response_revision:1073; number_of_response:1; }","duration":"227.855437ms","start":"2024-09-16T17:05:59.687383Z","end":"2024-09-16T17:05:59.915238Z","steps":["trace[250798638] 'process raft request'  (duration: 227.77306ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:06:37.597415Z","caller":"traceutil/trace.go:171","msg":"trace[999221063] linearizableReadLoop","detail":"{readStateIndex:1291; appliedIndex:1290; }","duration":"305.978367ms","start":"2024-09-16T17:06:37.291424Z","end":"2024-09-16T17:06:37.597402Z","steps":["trace[999221063] 'read index received'  (duration: 305.898407ms)","trace[999221063] 'applied index is now lower than readState.Index'  (duration: 79.752µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T17:06:37.597461Z","caller":"traceutil/trace.go:171","msg":"trace[1812567428] transaction","detail":"{read_only:false; response_revision:1262; number_of_response:1; }","duration":"312.835618ms","start":"2024-09-16T17:06:37.284622Z","end":"2024-09-16T17:06:37.597458Z","steps":["trace[1812567428] 'process raft request'  (duration: 312.72274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:06:37.597516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.818492ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.105.2\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-09-16T17:06:37.597533Z","caller":"traceutil/trace.go:171","msg":"trace[521815641] range","detail":"{range_begin:/registry/masterleases/192.168.105.2; range_end:; response_count:1; response_revision:1262; }","duration":"288.845534ms","start":"2024-09-16T17:06:37.308682Z","end":"2024-09-16T17:06:37.597528Z","steps":["trace[521815641] 'agreement among raft nodes before linearized reading'  (duration: 288.784407ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:06:37.597600Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.175414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-rjl25.17f5c85b94c43344\" ","response":"range_response_count:1 size:808"}
	{"level":"info","ts":"2024-09-16T17:06:37.597608Z","caller":"traceutil/trace.go:171","msg":"trace[1983157932] range","detail":"{range_begin:/registry/events/gadget/gadget-rjl25.17f5c85b94c43344; range_end:; response_count:1; response_revision:1262; }","duration":"306.183498ms","start":"2024-09-16T17:06:37.291422Z","end":"2024-09-16T17:06:37.597606Z","steps":["trace[1983157932] 'agreement among raft nodes before linearized reading'  (duration: 306.158622ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:06:37.597614Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T17:06:37.291405Z","time spent":"306.206623ms","remote":"127.0.0.1:54256","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":831,"request content":"key:\"/registry/events/gadget/gadget-rjl25.17f5c85b94c43344\" "}
	{"level":"warn","ts":"2024-09-16T17:06:37.597654Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.069924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-16T17:06:37.597654Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T17:06:37.284616Z","time spent":"312.852827ms","remote":"127.0.0.1:54706","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3288,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate\" mod_revision:807 > success:<request_put:<key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate\" value_size:3203 >> failure:<request_range:<key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-pods-mutate\" > >"}
	{"level":"info","ts":"2024-09-16T17:06:37.597660Z","caller":"traceutil/trace.go:171","msg":"trace[773899459] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1262; }","duration":"185.07684ms","start":"2024-09-16T17:06:37.412581Z","end":"2024-09-16T17:06:37.597658Z","steps":["trace[773899459] 'agreement among raft nodes before linearized reading'  (duration: 185.064964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:06:37.597697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.264885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-16T17:06:37.597703Z","caller":"traceutil/trace.go:171","msg":"trace[1563111252] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1262; }","duration":"238.272094ms","start":"2024-09-16T17:06:37.359430Z","end":"2024-09-16T17:06:37.597702Z","steps":["trace[1563111252] 'agreement among raft nodes before linearized reading'  (duration: 238.259718ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:06:37.597739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.880395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:06:37.597748Z","caller":"traceutil/trace.go:171","msg":"trace[715339243] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1262; }","duration":"260.889478ms","start":"2024-09-16T17:06:37.336856Z","end":"2024-09-16T17:06:37.597746Z","steps":["trace[715339243] 'agreement among raft nodes before linearized reading'  (duration: 260.875728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:07:27.408533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.736731ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:07:27.408577Z","caller":"traceutil/trace.go:171","msg":"trace[1559157240] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1477; }","duration":"269.78444ms","start":"2024-09-16T17:07:27.138785Z","end":"2024-09-16T17:07:27.408570Z","steps":["trace[1559157240] 'range keys from in-memory index tree'  (duration: 269.730772ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:15:15.563604Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1880}
	{"level":"info","ts":"2024-09-16T17:15:15.660548Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1880,"took":"94.694353ms","hash":354055989,"current-db-size-bytes":8876032,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4874240,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-16T17:15:15.660579Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":354055989,"revision":1880,"compact-revision":-1}
	
	
	==> gcp-auth [8d07ce17ec31] <==
	2024/09/16 17:07:03 GCP Auth Webhook started!
	2024/09/16 17:07:20 Ready to marshal response ...
	2024/09/16 17:07:20 Ready to write response ...
	2024/09/16 17:07:21 Ready to marshal response ...
	2024/09/16 17:07:21 Ready to write response ...
	2024/09/16 17:07:43 Ready to marshal response ...
	2024/09/16 17:07:43 Ready to write response ...
	2024/09/16 17:07:43 Ready to marshal response ...
	2024/09/16 17:07:43 Ready to write response ...
	2024/09/16 17:07:43 Ready to marshal response ...
	2024/09/16 17:07:43 Ready to write response ...
	2024/09/16 17:15:52 Ready to marshal response ...
	2024/09/16 17:15:52 Ready to write response ...
	2024/09/16 17:15:54 Ready to marshal response ...
	2024/09/16 17:15:54 Ready to write response ...
	2024/09/16 17:16:11 Ready to marshal response ...
	2024/09/16 17:16:11 Ready to write response ...
	2024/09/16 17:16:43 Ready to marshal response ...
	2024/09/16 17:16:43 Ready to write response ...
	2024/09/16 17:16:52 Ready to marshal response ...
	2024/09/16 17:16:52 Ready to write response ...
	
	
	==> kernel <==
	 17:16:55 up 11 min,  0 users,  load average: 1.36, 0.84, 0.52
	Linux addons-138000 5.10.207 #1 SMP PREEMPT Mon Sep 16 12:01:57 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [29d58d25aa02] <==
	W0916 17:07:35.013390       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0916 17:07:35.013396       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0916 17:07:35.015047       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0916 17:07:35.154938       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W0916 17:07:35.155402       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0916 17:16:01.819945       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0916 17:16:26.992653       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:16:26.992669       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:16:27.001102       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:16:27.001140       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:16:27.014412       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:16:27.014655       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:16:27.043767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:16:27.044758       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:16:27.109745       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:16:27.109758       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0916 17:16:28.044847       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0916 17:16:28.110087       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0916 17:16:28.119931       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0916 17:16:37.734340       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 17:16:38.745352       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 17:16:43.068901       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 17:16:43.169955       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.15.248"}
	I0916 17:16:43.625289       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0916 17:16:52.470365       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.10.182"}
	
	
	==> kube-controller-manager [0828348d56f5] <==
	E0916 17:16:45.174709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:16:47.812838       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0916 17:16:48.002943       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:16:48.003028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:16:48.028312       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:16:48.028374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:16:51.579070       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:16:51.579194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:16:52.341082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.262458ms"
	I0916 17:16:52.343969       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="2.842535ms"
	I0916 17:16:52.344807       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.375µs"
	I0916 17:16:52.347466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="16.749µs"
	I0916 17:16:53.248965       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 17:16:53.249016       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 17:16:53.275165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="3.125µs"
	I0916 17:16:53.275398       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0916 17:16:53.277144       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0916 17:16:53.635374       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 17:16:53.635450       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 17:16:54.234629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-138000"
	W0916 17:16:54.872178       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:16:54.872201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:16:54.885152       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.917µs"
	I0916 17:16:55.121365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.404476ms"
	I0916 17:16:55.121388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.667µs"
	
	
	==> kube-proxy [45b3640454d7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 17:05:24.320952       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 17:05:24.331914       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0916 17:05:24.331944       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 17:05:24.346334       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 17:05:24.346354       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 17:05:24.346369       1 server_linux.go:169] "Using iptables Proxier"
	I0916 17:05:24.347309       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 17:05:24.347425       1 server.go:483] "Version info" version="v1.31.1"
	I0916 17:05:24.347432       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 17:05:24.347901       1 config.go:199] "Starting service config controller"
	I0916 17:05:24.347908       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 17:05:24.347934       1 config.go:105] "Starting endpoint slice config controller"
	I0916 17:05:24.347937       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 17:05:24.348231       1 config.go:328] "Starting node config controller"
	I0916 17:05:24.348236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 17:05:24.449236       1 shared_informer.go:320] Caches are synced for node config
	I0916 17:05:24.449257       1 shared_informer.go:320] Caches are synced for service config
	I0916 17:05:24.449268       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [dac010e790fd] <==
	W0916 17:05:16.054034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 17:05:16.054071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:16.054098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 17:05:16.054120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:16.056565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 17:05:16.056584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:16.056615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 17:05:16.056661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:16.056698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 17:05:16.056711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:16.056770       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 17:05:16.056791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:16.056919       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 17:05:16.056939       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 17:05:17.015695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 17:05:17.015749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:17.038663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 17:05:17.038698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:17.073628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 17:05:17.073695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:17.097197       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 17:05:17.097291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:05:17.156263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 17:05:17.156288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 17:05:17.651871       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 17:16:53 addons-138000 kubelet[2029]: I0916 17:16:53.090179    2029 scope.go:117] "RemoveContainer" containerID="487dab20e39abbc6e71b12369845f9d2d3bfea41d7379690d8bef403720c614c"
	Sep 16 17:16:53 addons-138000 kubelet[2029]: E0916 17:16:53.090713    2029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 487dab20e39abbc6e71b12369845f9d2d3bfea41d7379690d8bef403720c614c" containerID="487dab20e39abbc6e71b12369845f9d2d3bfea41d7379690d8bef403720c614c"
	Sep 16 17:16:53 addons-138000 kubelet[2029]: I0916 17:16:53.090729    2029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"487dab20e39abbc6e71b12369845f9d2d3bfea41d7379690d8bef403720c614c"} err="failed to get container status \"487dab20e39abbc6e71b12369845f9d2d3bfea41d7379690d8bef403720c614c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 487dab20e39abbc6e71b12369845f9d2d3bfea41d7379690d8bef403720c614c"
	Sep 16 17:16:53 addons-138000 kubelet[2029]: E0916 17:16:53.291543    2029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="3d457d95-115a-49e4-836d-4fffed376357"
	Sep 16 17:16:54 addons-138000 kubelet[2029]: E0916 17:16:54.291183    2029 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="153ebf2a-d5e5-498f-b167-7ce05c5cf9e2"
	Sep 16 17:16:54 addons-138000 kubelet[2029]: I0916 17:16:54.297015    2029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dfbbe11-f21e-40f4-868f-efa62b7ec9cb" path="/var/lib/kubelet/pods/4dfbbe11-f21e-40f4-868f-efa62b7ec9cb/volumes"
	Sep 16 17:16:54 addons-138000 kubelet[2029]: I0916 17:16:54.297848    2029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a62bf9-9ceb-4009-af1c-469aa29d3dc0" path="/var/lib/kubelet/pods/61a62bf9-9ceb-4009-af1c-469aa29d3dc0/volumes"
	Sep 16 17:16:54 addons-138000 kubelet[2029]: I0916 17:16:54.298016    2029 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51a2889-dca4-4a7e-bc5b-2a9e033f186f" path="/var/lib/kubelet/pods/e51a2889-dca4-4a7e-bc5b-2a9e033f186f/volumes"
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.135447    2029 scope.go:117] "RemoveContainer" containerID="cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b"
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.138006    2029 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-dwkrn" podStartSLOduration=1.528337743 podStartE2EDuration="3.137997247s" podCreationTimestamp="2024-09-16 17:16:52 +0000 UTC" firstStartedPulling="2024-09-16 17:16:52.774446324 +0000 UTC m=+694.530986065" lastFinishedPulling="2024-09-16 17:16:54.38410587 +0000 UTC m=+696.140645569" observedRunningTime="2024-09-16 17:16:55.11023688 +0000 UTC m=+696.866776620" watchObservedRunningTime="2024-09-16 17:16:55.137997247 +0000 UTC m=+696.894536946"
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.160250    2029 scope.go:117] "RemoveContainer" containerID="cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b"
	Sep 16 17:16:55 addons-138000 kubelet[2029]: E0916 17:16:55.160734    2029 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b" containerID="cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b"
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.160752    2029 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b"} err="failed to get container status \"cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b\": rpc error: code = Unknown desc = Error response from daemon: No such container: cc2032ab1d384e11222f53d8723927679c7b40a69f1e9a2c0aa313834076669b"
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.235212    2029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpslp\" (UniqueName: \"kubernetes.io/projected/00468330-2389-470c-9e96-e57cad540e47-kube-api-access-bpslp\") pod \"00468330-2389-470c-9e96-e57cad540e47\" (UID: \"00468330-2389-470c-9e96-e57cad540e47\") "
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.235236    2029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s4qc\" (UniqueName: \"kubernetes.io/projected/aaeea89d-dbf1-40ff-8089-7095e3cd9e2a-kube-api-access-8s4qc\") pod \"aaeea89d-dbf1-40ff-8089-7095e3cd9e2a\" (UID: \"aaeea89d-dbf1-40ff-8089-7095e3cd9e2a\") "
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.236092    2029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00468330-2389-470c-9e96-e57cad540e47-kube-api-access-bpslp" (OuterVolumeSpecName: "kube-api-access-bpslp") pod "00468330-2389-470c-9e96-e57cad540e47" (UID: "00468330-2389-470c-9e96-e57cad540e47"). InnerVolumeSpecName "kube-api-access-bpslp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.236971    2029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaeea89d-dbf1-40ff-8089-7095e3cd9e2a-kube-api-access-8s4qc" (OuterVolumeSpecName: "kube-api-access-8s4qc") pod "aaeea89d-dbf1-40ff-8089-7095e3cd9e2a" (UID: "aaeea89d-dbf1-40ff-8089-7095e3cd9e2a"). InnerVolumeSpecName "kube-api-access-8s4qc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.335678    2029 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8s4qc\" (UniqueName: \"kubernetes.io/projected/aaeea89d-dbf1-40ff-8089-7095e3cd9e2a-kube-api-access-8s4qc\") on node \"addons-138000\" DevicePath \"\""
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.335691    2029 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bpslp\" (UniqueName: \"kubernetes.io/projected/00468330-2389-470c-9e96-e57cad540e47-kube-api-access-bpslp\") on node \"addons-138000\" DevicePath \"\""
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.436442    2029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/153ebf2a-d5e5-498f-b167-7ce05c5cf9e2-gcp-creds\") pod \"153ebf2a-d5e5-498f-b167-7ce05c5cf9e2\" (UID: \"153ebf2a-d5e5-498f-b167-7ce05c5cf9e2\") "
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.436483    2029 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4dn\" (UniqueName: \"kubernetes.io/projected/153ebf2a-d5e5-498f-b167-7ce05c5cf9e2-kube-api-access-tx4dn\") pod \"153ebf2a-d5e5-498f-b167-7ce05c5cf9e2\" (UID: \"153ebf2a-d5e5-498f-b167-7ce05c5cf9e2\") "
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.437617    2029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153ebf2a-d5e5-498f-b167-7ce05c5cf9e2-kube-api-access-tx4dn" (OuterVolumeSpecName: "kube-api-access-tx4dn") pod "153ebf2a-d5e5-498f-b167-7ce05c5cf9e2" (UID: "153ebf2a-d5e5-498f-b167-7ce05c5cf9e2"). InnerVolumeSpecName "kube-api-access-tx4dn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.437634    2029 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/153ebf2a-d5e5-498f-b167-7ce05c5cf9e2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "153ebf2a-d5e5-498f-b167-7ce05c5cf9e2" (UID: "153ebf2a-d5e5-498f-b167-7ce05c5cf9e2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.536882    2029 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tx4dn\" (UniqueName: \"kubernetes.io/projected/153ebf2a-d5e5-498f-b167-7ce05c5cf9e2-kube-api-access-tx4dn\") on node \"addons-138000\" DevicePath \"\""
	Sep 16 17:16:55 addons-138000 kubelet[2029]: I0916 17:16:55.536906    2029 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/153ebf2a-d5e5-498f-b167-7ce05c5cf9e2-gcp-creds\") on node \"addons-138000\" DevicePath \"\""
	
	
	==> storage-provisioner [be88ddf2f17b] <==
	I0916 17:05:25.631534       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 17:05:25.645744       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 17:05:25.645790       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 17:05:25.651307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 17:05:25.651651       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-138000_726939e5-bbaa-4152-89ef-866ce7a8f99c!
	I0916 17:05:25.652251       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a60350e3-a094-4043-8ad5-d774ee2e5944", APIVersion:"v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-138000_726939e5-bbaa-4152-89ef-866ce7a8f99c became leader
	I0916 17:05:25.752101       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-138000_726939e5-bbaa-4152-89ef-866ce7a8f99c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-138000 -n addons-138000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-138000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox registry-test registry-proxy-lmvtl
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-138000 describe pod busybox registry-test registry-proxy-lmvtl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-138000 describe pod busybox registry-test registry-proxy-lmvtl: exit status 1 (44.851541ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-138000/192.168.105.2
	Start Time:       Mon, 16 Sep 2024 10:07:43 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hmcpp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hmcpp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-138000
	  Normal   Pulling    7m42s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m42s (x4 over 9m11s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m42s (x4 over 9m11s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m11s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m2s (x21 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:                      registry-test
	Namespace:                 default
	Priority:                  0
	Service Account:           default
	Node:                      addons-138000/192.168.105.2
	Start Time:                Mon, 16 Sep 2024 10:15:54 -0700
	Labels:                    run=registry-test
	Annotations:               <none>
	Status:                    Terminating (lasts <invalid>)
	Termination Grace Period:  30s
	IP:                        10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tx4dn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tx4dn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  61s                default-scheduler  Successfully assigned default/registry-test to addons-138000
	  Normal   Pulling    24s (x3 over 60s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	  Warning  Failed     24s (x3 over 60s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/latest": unauthorized: authentication failed
	  Warning  Failed     24s (x3 over 60s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x4 over 60s)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox"
	  Warning  Failed     1s (x4 over 60s)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-proxy-lmvtl" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-138000 describe pod busybox registry-test registry-proxy-lmvtl: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.26s)
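
The Registry failure above is an image-pull problem rather than a registry-addon problem: busybox and registry-test are stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox is rejected with "unauthorized: authentication failed", while the gcp-auth addon injects only fake credentials (PROJECT_ID=this_is_fake). A minimal triage sketch, assuming docker and kubectl are on the host's PATH; these commands are not part of the captured run:

	# Retry the same pull outside the cluster; if it also fails, the problem
	# is registry authentication or rate limiting, not cluster networking.
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

	# List the pull failures the kubelet recorded for the default namespace.
	kubectl --context addons-138000 get events -n default \
	  --field-selector reason=Failed --sort-by=.lastTimestamp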

TestCertOptions (10.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-161000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-161000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.958053417s)

-- stdout --
	* [cert-options-161000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-161000" primary control-plane node in "cert-options-161000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-161000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-161000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-161000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-161000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-161000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.383209ms)

-- stdout --
	* The control-plane node cert-options-161000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-161000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-161000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-161000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-161000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-161000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.88125ms)

-- stdout --
	* The control-plane node cert-options-161000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-161000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-161000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-161000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-161000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-16 10:42:06.113743 -0700 PDT m=+2264.260795835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-161000 -n cert-options-161000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-161000 -n cert-options-161000: exit status 7 (29.489292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-161000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-161000
--- FAIL: TestCertOptions (10.23s)
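
Every qemu2 start in this report fails the same way: minikube cannot reach the socket_vmnet helper ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so no VM is ever created and each follow-up assertion runs against a stopped host. A host-side sanity check, sketched under the assumption that socket_vmnet was installed as a launchd service (the service label below is a guess, not taken from the log):

	# Is any socket_vmnet daemon loaded? (the label is an assumption)
	sudo launchctl list | grep -i socket_vmnet

	# Does the socket minikube dials exist, and with what permissions?
	ls -l /var/run/socket_vmnet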

TestCertExpiration (195.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-913000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-913000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.948877584s)

-- stdout --
	* [cert-expiration-913000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-913000" primary control-plane node in "cert-expiration-913000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-913000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-913000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-913000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
E0916 10:42:04.093969    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-913000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-913000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.181784s)

-- stdout --
	* [cert-expiration-913000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-913000" primary control-plane node in "cert-expiration-913000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-913000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-913000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-913000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-913000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-913000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-913000" primary control-plane node in "cert-expiration-913000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-913000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-913000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-913000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-16 10:45:06.103403 -0700 PDT m=+2444.254696668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-913000 -n cert-expiration-913000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-913000 -n cert-expiration-913000: exit status 7 (56.709125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-913000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-913000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-913000
--- FAIL: TestCertExpiration (195.27s)
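
Note: both start attempts fail at the same first step seen throughout this report: the qemu2 driver cannot connect to /var/run/socket_vmnet, so no VM is ever created and the certificate-expiration logic is never exercised. A quick health check for the build host, assuming socket_vmnet was installed via Homebrew (service name and paths may differ for a source build):

	$ ls -l /var/run/socket_vmnet              # the socket file should exist
	$ sudo launchctl list | grep socket_vmnet  # the daemon should be loaded
	$ sudo brew services restart socket_vmnet  # restart it if the socket is stale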

TestDockerFlags (10.19s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-534000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
E0916 10:41:49.870971    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-534000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.957919709s)

-- stdout --
	* [docker-flags-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-534000" primary control-plane node in "docker-flags-534000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:41:45.829321    3921 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:41:45.829449    3921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:45.829453    3921 out.go:358] Setting ErrFile to fd 2...
	I0916 10:41:45.829455    3921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:45.829590    3921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:41:45.830768    3921 out.go:352] Setting JSON to false
	I0916 10:41:45.846851    3921 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2469,"bootTime":1726506036,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:41:45.846919    3921 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:41:45.852869    3921 out.go:177] * [docker-flags-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:41:45.860712    3921 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:41:45.860834    3921 notify.go:220] Checking for updates...
	I0916 10:41:45.868732    3921 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:41:45.871666    3921 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:41:45.874695    3921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:41:45.877679    3921 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:41:45.880613    3921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:41:45.884044    3921 config.go:182] Loaded profile config "force-systemd-flag-626000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:41:45.884111    3921 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:41:45.884157    3921 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:41:45.888610    3921 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:41:45.895656    3921 start.go:297] selected driver: qemu2
	I0916 10:41:45.895663    3921 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:41:45.895669    3921 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:41:45.898104    3921 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:41:45.901675    3921 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:41:45.904741    3921 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0916 10:41:45.904764    3921 cni.go:84] Creating CNI manager for ""
	I0916 10:41:45.904792    3921 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:41:45.904797    3921 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:41:45.904827    3921 start.go:340] cluster config:
	{Name:docker-flags-534000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:45.908582    3921 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:41:45.916727    3921 out.go:177] * Starting "docker-flags-534000" primary control-plane node in "docker-flags-534000" cluster
	I0916 10:41:45.920639    3921 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:41:45.920654    3921 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:41:45.920665    3921 cache.go:56] Caching tarball of preloaded images
	I0916 10:41:45.920731    3921 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:41:45.920738    3921 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:41:45.920791    3921 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/docker-flags-534000/config.json ...
	I0916 10:41:45.920803    3921 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/docker-flags-534000/config.json: {Name:mkcd37baeb6e63851da1dec130906cdf3f9b1ade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:45.921113    3921 start.go:360] acquireMachinesLock for docker-flags-534000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:41:45.921149    3921 start.go:364] duration metric: took 30µs to acquireMachinesLock for "docker-flags-534000"
	I0916 10:41:45.921160    3921 start.go:93] Provisioning new machine with config: &{Name:docker-flags-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:41:45.921190    3921 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:41:45.928588    3921 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 10:41:45.947125    3921 start.go:159] libmachine.API.Create for "docker-flags-534000" (driver="qemu2")
	I0916 10:41:45.947158    3921 client.go:168] LocalClient.Create starting
	I0916 10:41:45.947229    3921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:41:45.947264    3921 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:45.947272    3921 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:45.947317    3921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:41:45.947341    3921 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:45.947349    3921 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:45.947778    3921 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:41:46.106860    3921 main.go:141] libmachine: Creating SSH key...
	I0916 10:41:46.250534    3921 main.go:141] libmachine: Creating Disk image...
	I0916 10:41:46.250540    3921 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:41:46.250754    3921 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2
	I0916 10:41:46.260411    3921 main.go:141] libmachine: STDOUT: 
	I0916 10:41:46.260429    3921 main.go:141] libmachine: STDERR: 
	I0916 10:41:46.260483    3921 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2 +20000M
	I0916 10:41:46.268380    3921 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:41:46.268394    3921 main.go:141] libmachine: STDERR: 
	I0916 10:41:46.268412    3921 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2
	I0916 10:41:46.268418    3921 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:41:46.268427    3921 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:41:46.268454    3921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:b7:be:05:62:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2
	I0916 10:41:46.270088    3921 main.go:141] libmachine: STDOUT: 
	I0916 10:41:46.270103    3921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:41:46.270133    3921 client.go:171] duration metric: took 322.975416ms to LocalClient.Create
	I0916 10:41:48.272254    3921 start.go:128] duration metric: took 2.351102125s to createHost
	I0916 10:41:48.272345    3921 start.go:83] releasing machines lock for "docker-flags-534000", held for 2.351241375s
	W0916 10:41:48.272395    3921 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:48.294413    3921 out.go:177] * Deleting "docker-flags-534000" in qemu2 ...
	W0916 10:41:48.320364    3921 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:48.320386    3921 start.go:729] Will try again in 5 seconds ...
	I0916 10:41:53.322518    3921 start.go:360] acquireMachinesLock for docker-flags-534000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:41:53.363761    3921 start.go:364] duration metric: took 41.107416ms to acquireMachinesLock for "docker-flags-534000"
	I0916 10:41:53.363925    3921 start.go:93] Provisioning new machine with config: &{Name:docker-flags-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:41:53.364230    3921 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:41:53.373828    3921 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 10:41:53.423073    3921 start.go:159] libmachine.API.Create for "docker-flags-534000" (driver="qemu2")
	I0916 10:41:53.423123    3921 client.go:168] LocalClient.Create starting
	I0916 10:41:53.423245    3921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:41:53.423308    3921 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:53.423322    3921 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:53.423383    3921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:41:53.423428    3921 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:53.423442    3921 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:53.424096    3921 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:41:53.627042    3921 main.go:141] libmachine: Creating SSH key...
	I0916 10:41:53.680118    3921 main.go:141] libmachine: Creating Disk image...
	I0916 10:41:53.680123    3921 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:41:53.680302    3921 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2
	I0916 10:41:53.689417    3921 main.go:141] libmachine: STDOUT: 
	I0916 10:41:53.689437    3921 main.go:141] libmachine: STDERR: 
	I0916 10:41:53.689500    3921 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2 +20000M
	I0916 10:41:53.697349    3921 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:41:53.697364    3921 main.go:141] libmachine: STDERR: 
	I0916 10:41:53.697375    3921 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2
	I0916 10:41:53.697379    3921 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:41:53.697390    3921 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:41:53.697414    3921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:4a:3b:20:82:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/docker-flags-534000/disk.qcow2
	I0916 10:41:53.699073    3921 main.go:141] libmachine: STDOUT: 
	I0916 10:41:53.699085    3921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:41:53.699098    3921 client.go:171] duration metric: took 275.974625ms to LocalClient.Create
	I0916 10:41:55.701224    3921 start.go:128] duration metric: took 2.337020375s to createHost
	I0916 10:41:55.701281    3921 start.go:83] releasing machines lock for "docker-flags-534000", held for 2.337529042s
	W0916 10:41:55.701630    3921 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:55.716325    3921 out.go:201] 
	W0916 10:41:55.729500    3921 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:41:55.729538    3921 out.go:270] * 
	* 
	W0916 10:41:55.732390    3921 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:41:55.744202    3921 out.go:201] 

** /stderr **
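
Note: the trace above pins down the failing step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to /var/run/socket_vmnet and then hand the vmnet file descriptor to qemu (-netdev socket,id=net0,fd=3). The connect is refused, so qemu never starts. That single step can be reproduced outside the test; an illustrative probe with macOS netcat:

	$ nc -w 1 -U /var/run/socket_vmnet </dev/null && echo connected || echo refused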
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-534000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-534000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-534000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.606708ms)

-- stdout --
	* The control-plane node docker-flags-534000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-534000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-534000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-534000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-534000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-534000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-534000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-534000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-534000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.554209ms)

-- stdout --
	* The control-plane node docker-flags-534000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-534000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-534000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-534000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-534000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-534000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-16 10:41:55.887087 -0700 PDT m=+2254.033899085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-534000 -n docker-flags-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-534000 -n docker-flags-534000: exit status 7 (29.749875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-534000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-534000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-534000
--- FAIL: TestDockerFlags (10.19s)
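
Note: on a healthy cluster the two systemctl probes above would pass because --docker-env values surface in the docker unit's Environment property and --docker-opt values in its ExecStart line. A sketch of the expected shape, using the flags under test (output lines are illustrative):

	$ out/minikube-darwin-arm64 -p docker-flags-534000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expect the Environment= line to contain FOO=BAR and BAZ=BAT
	$ out/minikube-darwin-arm64 -p docker-flags-534000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# expect the dockerd invocation to include --debug and --icc=true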

TestForceSystemdFlag (10.26s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-626000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-626000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.064401708s)

-- stdout --
	* [force-systemd-flag-626000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-626000" primary control-plane node in "force-systemd-flag-626000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-626000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:41:40.742799    3897 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:41:40.742924    3897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:40.742928    3897 out.go:358] Setting ErrFile to fd 2...
	I0916 10:41:40.742930    3897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:40.743048    3897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:41:40.744139    3897 out.go:352] Setting JSON to false
	I0916 10:41:40.760209    3897 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2464,"bootTime":1726506036,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:41:40.760266    3897 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:41:40.766206    3897 out.go:177] * [force-systemd-flag-626000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:41:40.774079    3897 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:41:40.774119    3897 notify.go:220] Checking for updates...
	I0916 10:41:40.783071    3897 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:41:40.786054    3897 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:41:40.789093    3897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:41:40.791975    3897 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:41:40.795035    3897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:41:40.798382    3897 config.go:182] Loaded profile config "force-systemd-env-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:41:40.798460    3897 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:41:40.798503    3897 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:41:40.800909    3897 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:41:40.808063    3897 start.go:297] selected driver: qemu2
	I0916 10:41:40.808068    3897 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:41:40.808079    3897 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:41:40.810419    3897 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:41:40.812063    3897 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:41:40.815109    3897 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:41:40.815128    3897 cni.go:84] Creating CNI manager for ""
	I0916 10:41:40.815162    3897 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:41:40.815169    3897 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:41:40.815212    3897 start.go:340] cluster config:
	{Name:force-systemd-flag-626000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:40.818998    3897 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:41:40.827053    3897 out.go:177] * Starting "force-systemd-flag-626000" primary control-plane node in "force-systemd-flag-626000" cluster
	I0916 10:41:40.831016    3897 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:41:40.831033    3897 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:41:40.831045    3897 cache.go:56] Caching tarball of preloaded images
	I0916 10:41:40.831128    3897 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:41:40.831135    3897 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:41:40.831193    3897 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/force-systemd-flag-626000/config.json ...
	I0916 10:41:40.831204    3897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/force-systemd-flag-626000/config.json: {Name:mkd3cb48f91aba9c06e810f58db8ec053555b69b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:40.831444    3897 start.go:360] acquireMachinesLock for force-systemd-flag-626000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:41:40.831481    3897 start.go:364] duration metric: took 30.167µs to acquireMachinesLock for "force-systemd-flag-626000"
	I0916 10:41:40.831493    3897 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:41:40.831519    3897 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:41:40.840065    3897 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 10:41:40.858136    3897 start.go:159] libmachine.API.Create for "force-systemd-flag-626000" (driver="qemu2")
	I0916 10:41:40.858166    3897 client.go:168] LocalClient.Create starting
	I0916 10:41:40.858228    3897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:41:40.858256    3897 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:40.858268    3897 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:40.858307    3897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:41:40.858330    3897 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:40.858344    3897 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:40.858677    3897 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:41:41.020530    3897 main.go:141] libmachine: Creating SSH key...
	I0916 10:41:41.117514    3897 main.go:141] libmachine: Creating Disk image...
	I0916 10:41:41.117520    3897 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:41:41.117694    3897 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2
	I0916 10:41:41.127234    3897 main.go:141] libmachine: STDOUT: 
	I0916 10:41:41.127269    3897 main.go:141] libmachine: STDERR: 
	I0916 10:41:41.127333    3897 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2 +20000M
	I0916 10:41:41.135098    3897 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:41:41.135113    3897 main.go:141] libmachine: STDERR: 
	I0916 10:41:41.135126    3897 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2
	I0916 10:41:41.135130    3897 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:41:41.135145    3897 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:41:41.135181    3897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:28:62:84:8c:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2
	I0916 10:41:41.136791    3897 main.go:141] libmachine: STDOUT: 
	I0916 10:41:41.136805    3897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:41:41.136825    3897 client.go:171] duration metric: took 278.659041ms to LocalClient.Create
	I0916 10:41:43.138946    3897 start.go:128] duration metric: took 2.3074605s to createHost
	I0916 10:41:43.139014    3897 start.go:83] releasing machines lock for "force-systemd-flag-626000", held for 2.307577666s
	W0916 10:41:43.139064    3897 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:43.161151    3897 out.go:177] * Deleting "force-systemd-flag-626000" in qemu2 ...
	W0916 10:41:43.186825    3897 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:43.186846    3897 start.go:729] Will try again in 5 seconds ...
	I0916 10:41:48.188973    3897 start.go:360] acquireMachinesLock for force-systemd-flag-626000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:41:48.272454    3897 start.go:364] duration metric: took 83.363958ms to acquireMachinesLock for "force-systemd-flag-626000"
	I0916 10:41:48.272649    3897 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:41:48.272845    3897 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:41:48.286480    3897 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 10:41:48.334681    3897 start.go:159] libmachine.API.Create for "force-systemd-flag-626000" (driver="qemu2")
	I0916 10:41:48.334741    3897 client.go:168] LocalClient.Create starting
	I0916 10:41:48.334862    3897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:41:48.334925    3897 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:48.334944    3897 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:48.335014    3897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:41:48.335067    3897 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:48.335079    3897 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:48.335625    3897 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:41:48.567800    3897 main.go:141] libmachine: Creating SSH key...
	I0916 10:41:48.688516    3897 main.go:141] libmachine: Creating Disk image...
	I0916 10:41:48.688522    3897 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:41:48.688712    3897 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2
	I0916 10:41:48.698053    3897 main.go:141] libmachine: STDOUT: 
	I0916 10:41:48.698069    3897 main.go:141] libmachine: STDERR: 
	I0916 10:41:48.698127    3897 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2 +20000M
	I0916 10:41:48.705993    3897 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:41:48.706010    3897 main.go:141] libmachine: STDERR: 
	I0916 10:41:48.706024    3897 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2
	I0916 10:41:48.706033    3897 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:41:48.706040    3897 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:41:48.706069    3897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:65:35:f2:0d:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-flag-626000/disk.qcow2
	I0916 10:41:48.707686    3897 main.go:141] libmachine: STDOUT: 
	I0916 10:41:48.707702    3897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:41:48.707713    3897 client.go:171] duration metric: took 372.974083ms to LocalClient.Create
	I0916 10:41:50.709842    3897 start.go:128] duration metric: took 2.43702425s to createHost
	I0916 10:41:50.709951    3897 start.go:83] releasing machines lock for "force-systemd-flag-626000", held for 2.437495833s
	W0916 10:41:50.710411    3897 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:50.726653    3897 out.go:201] 
	W0916 10:41:50.742239    3897 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:41:50.742279    3897 out.go:270] * 
	* 
	W0916 10:41:50.744754    3897 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:41:50.765035    3897 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-626000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-626000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-626000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.099959ms)

-- stdout --
	* The control-plane node force-systemd-flag-626000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-626000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-626000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-16 10:41:50.860141 -0700 PDT m=+2249.006834751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-626000 -n force-systemd-flag-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-626000 -n force-systemd-flag-626000: exit status 7 (34.302625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-626000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-626000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-626000
--- FAIL: TestForceSystemdFlag (10.26s)
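Both create attempts in this test fail at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched. A minimal diagnostic sketch for the test host follows; the binary and socket paths are taken from the logs above, while the gateway address is an assumption based on the 192.168.105.x guest addresses seen elsewhere in this report:

    # Is the socket_vmnet daemon alive, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If not, start it manually as root (assumed gateway; adjust as needed):
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet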

TestForceSystemdEnv (10.39s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-836000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-836000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.192908625s)

-- stdout --
	* [force-systemd-env-836000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-836000" primary control-plane node in "force-systemd-env-836000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-836000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:41:35.444427    3865 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:41:35.444555    3865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:35.444559    3865 out.go:358] Setting ErrFile to fd 2...
	I0916 10:41:35.444561    3865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:35.444699    3865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:41:35.445815    3865 out.go:352] Setting JSON to false
	I0916 10:41:35.462529    3865 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2459,"bootTime":1726506036,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:41:35.462599    3865 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:41:35.468891    3865 out.go:177] * [force-systemd-env-836000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:41:35.477067    3865 notify.go:220] Checking for updates...
	I0916 10:41:35.481864    3865 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:41:35.489957    3865 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:41:35.498009    3865 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:41:35.505967    3865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:41:35.514032    3865 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:41:35.524851    3865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0916 10:41:35.529342    3865 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:41:35.529393    3865 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:41:35.533036    3865 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:41:35.541038    3865 start.go:297] selected driver: qemu2
	I0916 10:41:35.541047    3865 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:41:35.541052    3865 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:41:35.543452    3865 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:41:35.547958    3865 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:41:35.552021    3865 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:41:35.552038    3865 cni.go:84] Creating CNI manager for ""
	I0916 10:41:35.552064    3865 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:41:35.552073    3865 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:41:35.552108    3865 start.go:340] cluster config:
	{Name:force-systemd-env-836000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:35.555820    3865 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:41:35.563012    3865 out.go:177] * Starting "force-systemd-env-836000" primary control-plane node in "force-systemd-env-836000" cluster
	I0916 10:41:35.568003    3865 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:41:35.568028    3865 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:41:35.568041    3865 cache.go:56] Caching tarball of preloaded images
	I0916 10:41:35.568121    3865 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:41:35.568128    3865 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:41:35.568190    3865 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/force-systemd-env-836000/config.json ...
	I0916 10:41:35.568201    3865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/force-systemd-env-836000/config.json: {Name:mk6e1cc683746114f7237ae3c17aec719b22a975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:35.568493    3865 start.go:360] acquireMachinesLock for force-systemd-env-836000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:41:35.568537    3865 start.go:364] duration metric: took 30µs to acquireMachinesLock for "force-systemd-env-836000"
	I0916 10:41:35.568551    3865 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:41:35.568577    3865 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:41:35.572911    3865 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 10:41:35.589331    3865 start.go:159] libmachine.API.Create for "force-systemd-env-836000" (driver="qemu2")
	I0916 10:41:35.589361    3865 client.go:168] LocalClient.Create starting
	I0916 10:41:35.589441    3865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:41:35.589471    3865 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:35.589480    3865 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:35.589523    3865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:41:35.589551    3865 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:35.589564    3865 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:35.589961    3865 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:41:35.774303    3865 main.go:141] libmachine: Creating SSH key...
	I0916 10:41:35.903403    3865 main.go:141] libmachine: Creating Disk image...
	I0916 10:41:35.903412    3865 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:41:35.903610    3865 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2
	I0916 10:41:35.913191    3865 main.go:141] libmachine: STDOUT: 
	I0916 10:41:35.913206    3865 main.go:141] libmachine: STDERR: 
	I0916 10:41:35.913258    3865 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2 +20000M
	I0916 10:41:35.921390    3865 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:41:35.921407    3865 main.go:141] libmachine: STDERR: 
	I0916 10:41:35.921420    3865 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2
	I0916 10:41:35.921428    3865 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:41:35.921443    3865 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:41:35.921482    3865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:4a:79:2d:81:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2
	I0916 10:41:35.923141    3865 main.go:141] libmachine: STDOUT: 
	I0916 10:41:35.923157    3865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:41:35.923187    3865 client.go:171] duration metric: took 333.829125ms to LocalClient.Create
	I0916 10:41:37.925385    3865 start.go:128] duration metric: took 2.356831709s to createHost
	I0916 10:41:37.925470    3865 start.go:83] releasing machines lock for "force-systemd-env-836000", held for 2.35697875s
	W0916 10:41:37.925619    3865 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:37.933782    3865 out.go:177] * Deleting "force-systemd-env-836000" in qemu2 ...
	W0916 10:41:37.964359    3865 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:37.964385    3865 start.go:729] Will try again in 5 seconds ...
	I0916 10:41:42.966522    3865 start.go:360] acquireMachinesLock for force-systemd-env-836000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:41:43.139164    3865 start.go:364] duration metric: took 172.508958ms to acquireMachinesLock for "force-systemd-env-836000"
	I0916 10:41:43.139293    3865 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:41:43.139518    3865 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:41:43.152167    3865 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 10:41:43.203683    3865 start.go:159] libmachine.API.Create for "force-systemd-env-836000" (driver="qemu2")
	I0916 10:41:43.203730    3865 client.go:168] LocalClient.Create starting
	I0916 10:41:43.203838    3865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:41:43.203899    3865 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:43.203916    3865 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:43.203971    3865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:41:43.204016    3865 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:43.204033    3865 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:43.204586    3865 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:41:43.434780    3865 main.go:141] libmachine: Creating SSH key...
	I0916 10:41:43.529231    3865 main.go:141] libmachine: Creating Disk image...
	I0916 10:41:43.529241    3865 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:41:43.529416    3865 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2
	I0916 10:41:43.539809    3865 main.go:141] libmachine: STDOUT: 
	I0916 10:41:43.539830    3865 main.go:141] libmachine: STDERR: 
	I0916 10:41:43.539895    3865 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2 +20000M
	I0916 10:41:43.547801    3865 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:41:43.547814    3865 main.go:141] libmachine: STDERR: 
	I0916 10:41:43.547835    3865 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2
	I0916 10:41:43.547847    3865 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:41:43.547856    3865 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:41:43.547888    3865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:cc:17:18:8a:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/force-systemd-env-836000/disk.qcow2
	I0916 10:41:43.549578    3865 main.go:141] libmachine: STDOUT: 
	I0916 10:41:43.549595    3865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:41:43.549607    3865 client.go:171] duration metric: took 345.878791ms to LocalClient.Create
	I0916 10:41:45.551726    3865 start.go:128] duration metric: took 2.412232166s to createHost
	I0916 10:41:45.551829    3865 start.go:83] releasing machines lock for "force-systemd-env-836000", held for 2.412680791s
	W0916 10:41:45.552240    3865 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-836000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-836000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:45.569919    3865 out.go:201] 
	W0916 10:41:45.579762    3865 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:41:45.579812    3865 out.go:270] * 
	* 
	W0916 10:41:45.582379    3865 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:41:45.593579    3865 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-836000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-836000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-836000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.847042ms)

-- stdout --
	* The control-plane node force-systemd-env-836000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-836000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-836000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-16 10:41:45.688874 -0700 PDT m=+2243.835446085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-836000 -n force-systemd-env-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-836000 -n force-systemd-env-836000: exit status 7 (34.940667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-836000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-836000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-836000
--- FAIL: TestForceSystemdEnv (10.39s)
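This is the same socket_vmnet failure as TestForceSystemdFlag above; see the diagnostic sketch there. For reference, once a profile does start, the assertion both force-systemd tests make reduces to the command below, taken verbatim from the docker_test.go:110 invocation in the logs; with MINIKUBE_FORCE_SYSTEMD=true set, the expected output is "systemd":

    out/minikube-darwin-arm64 -p force-systemd-env-836000 \
        ssh "docker info --format {{.CgroupDriver}}"
    # expected: systemd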

TestFunctional/parallel/ServiceCmdConnect (41.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-510000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-510000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-h9zb4" [24091e34-07ae-4857-8c51-9e5f39d97e78] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0916 10:22:04.121829    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:22:04.129867    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:22:04.143260    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:22:04.166613    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:22:04.209984    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:22:04.293346    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:22:04.456682    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:22:04.780038    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:22:05.423503    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-65d86f57f4-h9zb4" [24091e34-07ae-4857-8c51-9e5f39d97e78] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0916 10:22:06.705323    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005714292s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30813
functional_test.go:1661: error fetching http://192.168.105.4:30813: Get "http://192.168.105.4:30813": dial tcp 192.168.105.4:30813: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30813: Get "http://192.168.105.4:30813": dial tcp 192.168.105.4:30813: connect: connection refused
E0916 10:22:14.392460    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:30813: Get "http://192.168.105.4:30813": dial tcp 192.168.105.4:30813: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30813: Get "http://192.168.105.4:30813": dial tcp 192.168.105.4:30813: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30813: Get "http://192.168.105.4:30813": dial tcp 192.168.105.4:30813: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30813: Get "http://192.168.105.4:30813": dial tcp 192.168.105.4:30813: connect: connection refused
E0916 10:22:24.635737    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:30813: Get "http://192.168.105.4:30813": dial tcp 192.168.105.4:30813: connect: connection refused
2024/09/16 10:22:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1661: error fetching http://192.168.105.4:30813: Get "http://192.168.105.4:30813": dial tcp 192.168.105.4:30813: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30813: Get "http://192.168.105.4:30813": dial tcp 192.168.105.4:30813: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-510000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-h9zb4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-510000/192.168.105.4
Start Time:       Mon, 16 Sep 2024 10:22:01 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://5727ceab99441575b663c2b6017dd7dc3f45809395c77d0612df121f6a9b8c42
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Mon, 16 Sep 2024 10:22:22 -0700
Finished:     Mon, 16 Sep 2024 10:22:22 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4dfnx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4dfnx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  40s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-h9zb4 to functional-510000
Normal   Pulling    39s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     36s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.223s (3.223s including waiting). Image size: 84957542 bytes.
Normal   Created    19s (x3 over 36s)  kubelet            Created container echoserver-arm
Normal   Started    19s (x3 over 36s)  kubelet            Started container echoserver-arm
Normal   Pulled     19s (x2 over 36s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    7s (x4 over 35s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-h9zb4_default(24091e34-07ae-4857-8c51-9e5f39d97e78)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-510000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
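The single container log line above is the real failure: "exec format error" means the image's entrypoint binary was built for a different CPU architecture than the arm64 node, so the pod loops in CrashLoopBackOff. One way to confirm which architectures an image actually ships is sketched below using standard Docker CLI commands; docker manifest inspect needs a registry-backed reference, and the image is the one pulled in the pod events above:

    # Platforms published in the registry (if the tag is a manifest list):
    docker manifest inspect registry.k8s.io/echoserver-arm:1.8

    # Architecture of a locally pulled copy:
    docker image inspect registry.k8s.io/echoserver-arm:1.8 \
        --format '{{.Os}}/{{.Architecture}}'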
functional_test.go:1614: (dbg) Run:  kubectl --context functional-510000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.30.169
IPs:                      10.97.30.169
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30813/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
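Note the empty Endpoints field in the service description: with its only pod crash-looping, the service has no ready backends, which is why every fetch of http://192.168.105.4:30813 above was refused rather than timing out. A quick confirmation, sketched with the same kubectl context the test uses:

    # An empty ENDPOINTS column here matches the connection-refused errors.
    kubectl --context functional-510000 get endpoints hello-node-connect
    kubectl --context functional-510000 get pods -l app=hello-node-connect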
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-510000 -n functional-510000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-510000                                                                                                | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port797183888/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-510000 ssh findmnt                                                                                       | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-510000 ssh -- ls                                                                                         | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-510000 ssh sudo                                                                                          | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-510000                                                                                                | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3215452008/001:/mount2  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-510000                                                                                                | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3215452008/001:/mount1  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-510000                                                                                                | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3215452008/001:/mount3  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-510000 ssh findmnt                                                                                       | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-510000 ssh findmnt                                                                                       | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-510000 ssh findmnt                                                                                       | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-510000 ssh findmnt                                                                                       | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-510000 ssh findmnt                                                                                       | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-510000                                                                                                | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-510000                                                                                                | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-510000                                                                                                | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-510000 --dry-run                                                                                      | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|           | -p functional-510000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| license   |                                                                                                                     | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	| ssh       | functional-510000 ssh sudo                                                                                          | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT |                     |
	|           | systemctl is-active crio                                                                                            |                   |         |         |                     |                     |
	| image     | functional-510000 image load --daemon                                                                               | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|           | kicbase/echo-server:functional-510000                                                                               |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                                                                   |                   |         |         |                     |                     |
	| image     | functional-510000 image ls                                                                                          | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	| image     | functional-510000 image load --daemon                                                                               | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|           | kicbase/echo-server:functional-510000                                                                               |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                                                                   |                   |         |         |                     |                     |
	| image     | functional-510000 image ls                                                                                          | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	| image     | functional-510000 image load --daemon                                                                               | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|           | kicbase/echo-server:functional-510000                                                                               |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                                                                   |                   |         |         |                     |                     |
	| image     | functional-510000 image ls                                                                                          | functional-510000 | jenkins | v1.34.0 | 16 Sep 24 10:22 PDT | 16 Sep 24 10:22 PDT |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:22:30
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:22:30.752889    2236 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:30.753029    2236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:30.753033    2236 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:30.753035    2236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:30.753174    2236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:22:30.754211    2236 out.go:352] Setting JSON to false
	I0916 10:22:30.770372    2236 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1314,"bootTime":1726506036,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:22:30.770444    2236 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:22:30.775562    2236 out.go:177] * [functional-510000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:22:30.782450    2236 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:22:30.782513    2236 notify.go:220] Checking for updates...
	I0916 10:22:30.789562    2236 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:22:30.792489    2236 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:22:30.795545    2236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:30.798484    2236 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:22:30.801536    2236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:22:30.804784    2236 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:22:30.805037    2236 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:22:30.808490    2236 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:22:30.815528    2236 start.go:297] selected driver: qemu2
	I0916 10:22:30.815536    2236 start.go:901] validating driver "qemu2" against &{Name:functional-510000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:30.815599    2236 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:22:30.817856    2236 cni.go:84] Creating CNI manager for ""
	I0916 10:22:30.817892    2236 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:22:30.817939    2236 start.go:340] cluster config:
	{Name:functional-510000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:30.828514    2236 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 16 17:22:31 functional-510000 dockerd[5959]: time="2024-09-16T17:22:31.960809941Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.009709942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.009745581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.009755001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.009805271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:22:32 functional-510000 cri-dockerd[6213]: time="2024-09-16T17:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2efb6e1c66beed3550d0bd402be62339a3555f3ced7a5d25fd78a2ff2ba5fe58/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.060307463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.060348521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.060356858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.060403918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:22:32 functional-510000 dockerd[5959]: time="2024-09-16T17:22:32.091380236Z" level=info msg="ignoring event" container=e96ec2f4bccb70ce40437a1d556a8d09f5219a6e0d5a5183ac3347b93ac21358 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.091551221Z" level=info msg="shim disconnected" id=e96ec2f4bccb70ce40437a1d556a8d09f5219a6e0d5a5183ac3347b93ac21358 namespace=moby
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.091584401Z" level=warning msg="cleaning up after shim disconnected" id=e96ec2f4bccb70ce40437a1d556a8d09f5219a6e0d5a5183ac3347b93ac21358 namespace=moby
	Sep 16 17:22:32 functional-510000 dockerd[5965]: time="2024-09-16T17:22:32.091590028Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:22:36 functional-510000 cri-dockerd[6213]: time="2024-09-16T17:22:36Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 16 17:22:36 functional-510000 dockerd[5959]: time="2024-09-16T17:22:36.914829038Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 16 17:22:36 functional-510000 dockerd[5965]: time="2024-09-16T17:22:36.977614942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 17:22:36 functional-510000 dockerd[5965]: time="2024-09-16T17:22:36.977642828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 17:22:36 functional-510000 dockerd[5965]: time="2024-09-16T17:22:36.977651164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:22:36 functional-510000 dockerd[5965]: time="2024-09-16T17:22:36.977685178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:22:38 functional-510000 cri-dockerd[6213]: time="2024-09-16T17:22:38Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 16 17:22:38 functional-510000 dockerd[5965]: time="2024-09-16T17:22:38.850148190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 17:22:38 functional-510000 dockerd[5965]: time="2024-09-16T17:22:38.850439598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 17:22:38 functional-510000 dockerd[5965]: time="2024-09-16T17:22:38.850452145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:22:38 functional-510000 dockerd[5965]: time="2024-09-16T17:22:38.850525174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	bd9efc1d4934b       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   4 seconds ago        Running             dashboard-metrics-scraper   0                   2efb6e1c66bee       dashboard-metrics-scraper-c5db448b4-mkl7m
	0f898a21a223f       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         6 seconds ago        Running             kubernetes-dashboard        0                   e92313f61e22c       kubernetes-dashboard-695b96c756-qmd8v
	e96ec2f4bccb7       72565bf5bbedf                                                                                          10 seconds ago       Exited              echoserver-arm              2                   667f92385be3a       hello-node-64b4f8f9ff-zl5k5
	51cf086e5f057       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    18 seconds ago       Exited              mount-munger                0                   004d68ca7c220       busybox-mount
	5727ceab99441       72565bf5bbedf                                                                                          20 seconds ago       Exited              echoserver-arm              2                   68ee2bcaed865       hello-node-connect-65d86f57f4-h9zb4
	22490a4087a33       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                          34 seconds ago       Running             myfrontend                  0                   daa3fd24b8dfc       sp-pod
	9f428c4059700       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                          49 seconds ago       Running             nginx                       0                   547b92192bffe       nginx-svc
	13dfbb28dd293       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     2                   d99ba8843d256       coredns-7c65d6cfc9-nlggf
	b84a1b6e5d124       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         2                   b9bacb0010b75       storage-provisioner
	f3a3c0f3b1015       24a140c548c07                                                                                          About a minute ago   Running             kube-proxy                  2                   8807eba31d6a4       kube-proxy-4wh6v
	fe448a8445a19       7f8aa378bb47d                                                                                          About a minute ago   Running             kube-scheduler              2                   a31077b40ad69       kube-scheduler-functional-510000
	ef2cd9ae7d38a       279f381cb3736                                                                                          About a minute ago   Running             kube-controller-manager     2                   24f62a5e9a214       kube-controller-manager-functional-510000
	6194d6dd67280       27e3830e14027                                                                                          About a minute ago   Running             etcd                        2                   2a9bd802fdf58       etcd-functional-510000
	25e56a2551011       d3f53a98c0a9d                                                                                          About a minute ago   Running             kube-apiserver              0                   d06052aa051a9       kube-apiserver-functional-510000
	c88711c7a5f61       2f6c962e7b831                                                                                          2 minutes ago        Exited              coredns                     1                   63b4495293f8f       coredns-7c65d6cfc9-nlggf
	4244a4a987cec       ba04bb24b9575                                                                                          2 minutes ago        Exited              storage-provisioner         1                   cb56de1407bc4       storage-provisioner
	2741039616c07       24a140c548c07                                                                                          2 minutes ago        Exited              kube-proxy                  1                   5a78667cb3994       kube-proxy-4wh6v
	f7656fa949eec       27e3830e14027                                                                                          2 minutes ago        Exited              etcd                        1                   99bc5c40411bf       etcd-functional-510000
	c3c1f6472605f       7f8aa378bb47d                                                                                          2 minutes ago        Exited              kube-scheduler              1                   072319ef951f7       kube-scheduler-functional-510000
	ae6c11402b8f6       279f381cb3736                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   37ee1b2579992       kube-controller-manager-functional-510000
	
	
	==> coredns [13dfbb28dd29] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33241 - 24645 "HINFO IN 4287184282096917642.5418898482276361811. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009856375s
	[INFO] 10.244.0.1:32195 - 13247 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000125763s
	[INFO] 10.244.0.1:14296 - 61456 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000093374s
	[INFO] 10.244.0.1:34024 - 45987 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001627702s
	[INFO] 10.244.0.1:18893 - 24879 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000070989s
	[INFO] 10.244.0.1:46092 - 18451 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000068405s
	[INFO] 10.244.0.1:53252 - 5423 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000085912s
	
	
	==> coredns [c88711c7a5f6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40495 - 46626 "HINFO IN 6166690267326439753.378088487901927469. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.011567578s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-510000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-510000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=functional-510000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_20_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 17:20:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-510000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 17:22:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 17:22:24 +0000   Mon, 16 Sep 2024 17:20:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 17:22:24 +0000   Mon, 16 Sep 2024 17:20:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 17:22:24 +0000   Mon, 16 Sep 2024 17:20:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 17:22:24 +0000   Mon, 16 Sep 2024 17:20:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-510000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 315ea3cc5295455da4d0a6dd17b4fc50
	  System UUID:                315ea3cc5295455da4d0a6dd17b4fc50
	  Boot ID:                    ec8c3c29-dc95-4b52-aba1-6760e38e2c84
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-zl5k5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     hello-node-connect-65d86f57f4-h9zb4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 coredns-7c65d6cfc9-nlggf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m32s
	  kube-system                 etcd-functional-510000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m38s
	  kube-system                 kube-apiserver-functional-510000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-functional-510000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-proxy-4wh6v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-functional-510000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-mkl7m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-qmd8v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m31s                kube-proxy       
	  Normal  Starting                 78s                  kube-proxy       
	  Normal  Starting                 2m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m38s                kubelet          Node functional-510000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m38s                kubelet          Node functional-510000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s                kubelet          Node functional-510000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m38s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m34s                kubelet          Node functional-510000 status is now: NodeReady
	  Normal  RegisteredNode           2m33s                node-controller  Node functional-510000 event: Registered Node functional-510000 in Controller
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node functional-510000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node functional-510000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node functional-510000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m                   node-controller  Node functional-510000 event: Registered Node functional-510000 in Controller
	  Normal  Starting                 82s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)    kubelet          Node functional-510000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)    kubelet          Node functional-510000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)    kubelet          Node functional-510000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           77s                  node-controller  Node functional-510000 event: Registered Node functional-510000 in Controller
	
	
	==> dmesg <==
	[  +0.055866] kauditd_printk_skb: 33 callbacks suppressed
	[Sep16 17:21] systemd-fstab-generator[5484]: Ignoring "noauto" option for root device
	[  +0.054400] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.111221] systemd-fstab-generator[5518]: Ignoring "noauto" option for root device
	[  +0.090546] systemd-fstab-generator[5530]: Ignoring "noauto" option for root device
	[  +0.122447] systemd-fstab-generator[5544]: Ignoring "noauto" option for root device
	[  +5.101923] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.454985] systemd-fstab-generator[6166]: Ignoring "noauto" option for root device
	[  +0.094006] systemd-fstab-generator[6178]: Ignoring "noauto" option for root device
	[  +0.094001] systemd-fstab-generator[6190]: Ignoring "noauto" option for root device
	[  +0.106058] systemd-fstab-generator[6205]: Ignoring "noauto" option for root device
	[  +0.226938] systemd-fstab-generator[6370]: Ignoring "noauto" option for root device
	[  +1.028226] systemd-fstab-generator[6493]: Ignoring "noauto" option for root device
	[  +3.423985] kauditd_printk_skb: 199 callbacks suppressed
	[  +8.620792] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.969902] systemd-fstab-generator[7541]: Ignoring "noauto" option for root device
	[  +5.059325] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.189769] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.012717] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.199259] kauditd_printk_skb: 4 callbacks suppressed
	[Sep16 17:22] kauditd_printk_skb: 17 callbacks suppressed
	[  +9.273971] kauditd_printk_skb: 23 callbacks suppressed
	[  +7.115891] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.324417] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.579265] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6194d6dd6728] <==
	{"level":"info","ts":"2024-09-16T17:21:20.705902Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:21:20.706954Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T17:21:20.707695Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T17:21:20.707860Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T17:21:20.707761Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-16T17:21:20.708363Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-16T17:21:20.708335Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T17:21:21.999570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T17:21:21.999722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T17:21:21.999794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-16T17:21:21.999826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T17:21:21.999844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-16T17:21:21.999870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T17:21:21.999938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-16T17:21:22.002491Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-510000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T17:21:22.002634Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:21:22.003111Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T17:21:22.003168Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T17:21:22.003213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:21:22.005424Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T17:21:22.005424Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T17:21:22.008019Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T17:21:22.009630Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"warn","ts":"2024-09-16T17:22:36.908624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.002195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:22:36.909730Z","caller":"traceutil/trace.go:171","msg":"trace[530071379] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:867; }","duration":"139.134648ms","start":"2024-09-16T17:22:36.770585Z","end":"2024-09-16T17:22:36.909720Z","steps":["trace[530071379] 'range keys from in-memory index tree'  (duration: 137.961887ms)"],"step_count":1}
	
	
	==> etcd [f7656fa949ee] <==
	{"level":"info","ts":"2024-09-16T17:20:38.403777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T17:20:38.403836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-16T17:20:38.403873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T17:20:38.404280Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-16T17:20:38.404315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T17:20:38.404337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-16T17:20:38.409653Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-510000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T17:20:38.409726Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:20:38.410328Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:20:38.411948Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T17:20:38.413121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T17:20:38.413178Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T17:20:38.414438Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T17:20:38.415013Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T17:20:38.416192Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-16T17:21:05.890378Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T17:21:05.890409Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-510000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-16T17:21:05.890454Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T17:21:05.890498Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T17:21:05.904472Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T17:21:05.904495Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T17:21:05.904515Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-16T17:21:05.908237Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-16T17:21:05.908276Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-16T17:21:05.908281Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-510000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 17:22:42 up 2 min,  0 users,  load average: 0.93, 0.52, 0.22
	Linux functional-510000 5.10.207 #1 SMP PREEMPT Mon Sep 16 12:01:57 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [25e56a255101] <==
	I0916 17:21:22.624767       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 17:21:22.625063       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 17:21:22.625105       1 aggregator.go:171] initial CRD sync complete...
	I0916 17:21:22.625129       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 17:21:22.625158       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 17:21:22.625177       1 cache.go:39] Caches are synced for autoregister controller
	I0916 17:21:22.625464       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 17:21:22.627269       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 17:21:22.675099       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 17:21:23.526776       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 17:21:24.240052       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 17:21:24.245003       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 17:21:24.257577       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 17:21:24.267376       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 17:21:24.269787       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 17:21:25.939678       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 17:21:26.344857       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 17:21:44.995931       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.226.155"}
	I0916 17:21:50.186185       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.94.93"}
	I0916 17:22:01.592762       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 17:22:01.639125       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.30.169"}
	I0916 17:22:14.956339       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.95.93"}
	I0916 17:22:31.293741       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 17:22:31.402020       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.71.203"}
	I0916 17:22:31.411689       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.112.80"}
	
	
	==> kube-controller-manager [ae6c11402b8f] <==
	I0916 17:20:42.280099       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0916 17:20:42.280157       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 17:20:42.280851       1 shared_informer.go:320] Caches are synced for GC
	I0916 17:20:42.282061       1 shared_informer.go:320] Caches are synced for job
	I0916 17:20:42.286216       1 shared_informer.go:320] Caches are synced for HPA
	I0916 17:20:42.318442       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 17:20:42.318581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="54.66µs"
	I0916 17:20:42.318695       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0916 17:20:42.318889       1 shared_informer.go:320] Caches are synced for service account
	I0916 17:20:42.319029       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 17:20:42.319338       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0916 17:20:42.379192       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0916 17:20:42.418308       1 shared_informer.go:320] Caches are synced for expand
	I0916 17:20:42.468571       1 shared_informer.go:320] Caches are synced for PVC protection
	I0916 17:20:42.468790       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 17:20:42.471061       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 17:20:42.478065       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 17:20:42.488179       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 17:20:42.488213       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 17:20:42.519554       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 17:20:42.523310       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 17:20:42.571979       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 17:20:42.931282       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 17:20:42.983031       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 17:20:42.983058       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ef2cd9ae7d38] <==
	I0916 17:22:24.327858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-510000"
	I0916 17:22:31.327203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="16.887415ms"
	E0916 17:22:31.327367       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 17:22:31.334497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="17.266944ms"
	E0916 17:22:31.334517       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 17:22:31.334557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.08642ms"
	E0916 17:22:31.334562       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 17:22:31.338825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.856197ms"
	E0916 17:22:31.338845       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 17:22:31.339159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.299335ms"
	E0916 17:22:31.339172       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 17:22:31.350753       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.835187ms"
	I0916 17:22:31.359661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.858333ms"
	I0916 17:22:31.359769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.425µs"
	I0916 17:22:31.362581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.810354ms"
	I0916 17:22:31.367600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.383µs"
	I0916 17:22:31.384746       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="22.142082ms"
	I0916 17:22:31.385156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="17.799µs"
	I0916 17:22:32.036369       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.759µs"
	I0916 17:22:32.117174       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="21.384µs"
	I0916 17:22:34.035967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="25.885µs"
	I0916 17:22:37.165861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.363788ms"
	I0916 17:22:37.165895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="16.674µs"
	I0916 17:22:39.200268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.01891ms"
	I0916 17:22:39.200585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="39.933µs"
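
Note: the repeated 'serviceaccount "kubernetes-dashboard" not found' errors above are a startup race rather than the test failure: the dashboard Deployments were created moments before their ServiceAccount, and the subsequent error-free "Finished syncing" lines show the ReplicaSet controller succeeding on retry. A hypothetical spot-check (not part of the test run) that the account did materialize:

	kubectl --context functional-510000 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard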
	
	
	==> kube-proxy [2741039616c0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 17:20:40.065372       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 17:20:40.070627       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0916 17:20:40.070677       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 17:20:40.079121       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 17:20:40.079135       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 17:20:40.079147       1 server_linux.go:169] "Using iptables Proxier"
	I0916 17:20:40.079766       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 17:20:40.079861       1 server.go:483] "Version info" version="v1.31.1"
	I0916 17:20:40.079866       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 17:20:40.080483       1 config.go:199] "Starting service config controller"
	I0916 17:20:40.080516       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 17:20:40.080540       1 config.go:105] "Starting endpoint slice config controller"
	I0916 17:20:40.080554       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 17:20:40.080745       1 config.go:328] "Starting node config controller"
	I0916 17:20:40.080769       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 17:20:40.181924       1 shared_informer.go:320] Caches are synced for node config
	I0916 17:20:40.181939       1 shared_informer.go:320] Caches are synced for service config
	I0916 17:20:40.181948       1 shared_informer.go:320] Caches are synced for endpoint slice config
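
Note: "Operation not supported" from nftables together with "No iptables support for family" IPv6 indicates the guest kernel in this ISO lacks those features, so kube-proxy falls back to a single-stack IPv4 iptables proxier; the nodePortAddresses message is advisory. A sketch of the change that message points at, assuming kube-proxy v1.29 or newer, where the special value "primary" is recognized:

	kubectl --context functional-510000 -n kube-system edit configmap kube-proxy
	# then, inside the config.conf key, set:
	#   nodePortAddresses: ["primary"]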
	
	
	==> kube-proxy [f3a3c0f3b101] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 17:21:23.609409       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 17:21:23.612830       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0916 17:21:23.612854       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 17:21:23.620289       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 17:21:23.620302       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 17:21:23.620313       1 server_linux.go:169] "Using iptables Proxier"
	I0916 17:21:23.621020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 17:21:23.621156       1 server.go:483] "Version info" version="v1.31.1"
	I0916 17:21:23.621168       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 17:21:23.621627       1 config.go:199] "Starting service config controller"
	I0916 17:21:23.621639       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 17:21:23.621649       1 config.go:105] "Starting endpoint slice config controller"
	I0916 17:21:23.621662       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 17:21:23.621852       1 config.go:328] "Starting node config controller"
	I0916 17:21:23.621855       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 17:21:23.722754       1 shared_informer.go:320] Caches are synced for service config
	I0916 17:21:23.722852       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 17:21:23.722917       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c3c1f6472605] <==
	I0916 17:20:37.607314       1 serving.go:386] Generated self-signed cert in-memory
	W0916 17:20:38.964821       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 17:20:38.965070       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 17:20:38.965100       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 17:20:38.965119       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 17:20:38.997026       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 17:20:38.997238       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 17:20:38.998214       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 17:20:38.998274       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 17:20:38.998308       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 17:20:38.998333       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 17:20:39.098393       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 17:21:05.897983       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 17:21:05.898038       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0916 17:21:05.898099       1 run.go:72] "command failed" err="finished without leader elect"
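
Note: the authentication warnings above appear to be bootstrap-time noise (the replacement scheduler, [fe448a8445a1] below, starts at 17:21:21 without them), and "finished without leader elect" records this container being stopped when the cluster restarted mid-test. For reference, a concrete form of the remedy the first warning suggests, with an illustrative binding name; since the scheduler authenticates as the user system:kube-scheduler, --user stands in for the log's --serviceaccount form:

	kubectl create rolebinding -n kube-system scheduler-authentication-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler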
	
	
	==> kube-scheduler [fe448a8445a1] <==
	I0916 17:21:21.402374       1 serving.go:386] Generated self-signed cert in-memory
	I0916 17:21:22.589465       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 17:21:22.589478       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 17:21:22.591214       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 17:21:22.591248       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0916 17:21:22.591257       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0916 17:21:22.591267       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 17:21:22.591852       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 17:21:22.591860       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 17:21:22.591868       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0916 17:21:22.591871       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0916 17:21:22.692335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0916 17:21:22.692346       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0916 17:21:22.692410       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 17:22:22 functional-510000 kubelet[6500]: I0916 17:22:22.984227    6500 scope.go:117] "RemoveContainer" containerID="5727ceab99441575b663c2b6017dd7dc3f45809395c77d0612df121f6a9b8c42"
	Sep 16 17:22:22 functional-510000 kubelet[6500]: E0916 17:22:22.984300    6500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-h9zb4_default(24091e34-07ae-4857-8c51-9e5f39d97e78)\"" pod="default/hello-node-connect-65d86f57f4-h9zb4" podUID="24091e34-07ae-4857-8c51-9e5f39d97e78"
	Sep 16 17:22:23 functional-510000 kubelet[6500]: I0916 17:22:23.056914    6500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c9bf30c0-873b-4207-980b-9e86d2d1727d-test-volume\") pod \"busybox-mount\" (UID: \"c9bf30c0-873b-4207-980b-9e86d2d1727d\") " pod="default/busybox-mount"
	Sep 16 17:22:23 functional-510000 kubelet[6500]: I0916 17:22:23.057040    6500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddfv4\" (UniqueName: \"kubernetes.io/projected/c9bf30c0-873b-4207-980b-9e86d2d1727d-kube-api-access-ddfv4\") pod \"busybox-mount\" (UID: \"c9bf30c0-873b-4207-980b-9e86d2d1727d\") " pod="default/busybox-mount"
	Sep 16 17:22:26 functional-510000 kubelet[6500]: I0916 17:22:26.314163    6500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c9bf30c0-873b-4207-980b-9e86d2d1727d-test-volume\") pod \"c9bf30c0-873b-4207-980b-9e86d2d1727d\" (UID: \"c9bf30c0-873b-4207-980b-9e86d2d1727d\") "
	Sep 16 17:22:26 functional-510000 kubelet[6500]: I0916 17:22:26.314229    6500 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddfv4\" (UniqueName: \"kubernetes.io/projected/c9bf30c0-873b-4207-980b-9e86d2d1727d-kube-api-access-ddfv4\") pod \"c9bf30c0-873b-4207-980b-9e86d2d1727d\" (UID: \"c9bf30c0-873b-4207-980b-9e86d2d1727d\") "
	Sep 16 17:22:26 functional-510000 kubelet[6500]: I0916 17:22:26.314647    6500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9bf30c0-873b-4207-980b-9e86d2d1727d-test-volume" (OuterVolumeSpecName: "test-volume") pod "c9bf30c0-873b-4207-980b-9e86d2d1727d" (UID: "c9bf30c0-873b-4207-980b-9e86d2d1727d"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 17:22:26 functional-510000 kubelet[6500]: I0916 17:22:26.315877    6500 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9bf30c0-873b-4207-980b-9e86d2d1727d-kube-api-access-ddfv4" (OuterVolumeSpecName: "kube-api-access-ddfv4") pod "c9bf30c0-873b-4207-980b-9e86d2d1727d" (UID: "c9bf30c0-873b-4207-980b-9e86d2d1727d"). InnerVolumeSpecName "kube-api-access-ddfv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:22:26 functional-510000 kubelet[6500]: I0916 17:22:26.416827    6500 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ddfv4\" (UniqueName: \"kubernetes.io/projected/c9bf30c0-873b-4207-980b-9e86d2d1727d-kube-api-access-ddfv4\") on node \"functional-510000\" DevicePath \"\""
	Sep 16 17:22:26 functional-510000 kubelet[6500]: I0916 17:22:26.417102    6500 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c9bf30c0-873b-4207-980b-9e86d2d1727d-test-volume\") on node \"functional-510000\" DevicePath \"\""
	Sep 16 17:22:27 functional-510000 kubelet[6500]: I0916 17:22:27.057362    6500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="004d68ca7c22049be93b074739152e67018a417bf235713866e1b078b0535161"
	Sep 16 17:22:31 functional-510000 kubelet[6500]: E0916 17:22:31.347625    6500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c9bf30c0-873b-4207-980b-9e86d2d1727d" containerName="mount-munger"
	Sep 16 17:22:31 functional-510000 kubelet[6500]: I0916 17:22:31.347658    6500 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bf30c0-873b-4207-980b-9e86d2d1727d" containerName="mount-munger"
	Sep 16 17:22:31 functional-510000 kubelet[6500]: I0916 17:22:31.364502    6500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d51b62d3-917a-475d-bc83-a76c6e08c839-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-qmd8v\" (UID: \"d51b62d3-917a-475d-bc83-a76c6e08c839\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-qmd8v"
	Sep 16 17:22:31 functional-510000 kubelet[6500]: I0916 17:22:31.364523    6500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqbvh\" (UniqueName: \"kubernetes.io/projected/d51b62d3-917a-475d-bc83-a76c6e08c839-kube-api-access-pqbvh\") pod \"kubernetes-dashboard-695b96c756-qmd8v\" (UID: \"d51b62d3-917a-475d-bc83-a76c6e08c839\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-qmd8v"
	Sep 16 17:22:31 functional-510000 kubelet[6500]: I0916 17:22:31.565402    6500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da2319ab-38ef-4db3-9d3c-79d5f5255f98-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-mkl7m\" (UID: \"da2319ab-38ef-4db3-9d3c-79d5f5255f98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-mkl7m"
	Sep 16 17:22:31 functional-510000 kubelet[6500]: I0916 17:22:31.565431    6500 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsvwd\" (UniqueName: \"kubernetes.io/projected/da2319ab-38ef-4db3-9d3c-79d5f5255f98-kube-api-access-jsvwd\") pod \"dashboard-metrics-scraper-c5db448b4-mkl7m\" (UID: \"da2319ab-38ef-4db3-9d3c-79d5f5255f98\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-mkl7m"
	Sep 16 17:22:32 functional-510000 kubelet[6500]: I0916 17:22:32.030479    6500 scope.go:117] "RemoveContainer" containerID="65fd7d9a11388fdf23f5cf913889e0a4516c6ed983f5183f1cb7c18cfcf08ee1"
	Sep 16 17:22:32 functional-510000 kubelet[6500]: I0916 17:22:32.112714    6500 scope.go:117] "RemoveContainer" containerID="65fd7d9a11388fdf23f5cf913889e0a4516c6ed983f5183f1cb7c18cfcf08ee1"
	Sep 16 17:22:32 functional-510000 kubelet[6500]: I0916 17:22:32.113386    6500 scope.go:117] "RemoveContainer" containerID="e96ec2f4bccb70ce40437a1d556a8d09f5219a6e0d5a5183ac3347b93ac21358"
	Sep 16 17:22:32 functional-510000 kubelet[6500]: E0916 17:22:32.116143    6500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-zl5k5_default(2883cf94-374a-429c-a1be-27a83b001bce)\"" pod="default/hello-node-64b4f8f9ff-zl5k5" podUID="2883cf94-374a-429c-a1be-27a83b001bce"
	Sep 16 17:22:34 functional-510000 kubelet[6500]: I0916 17:22:34.030693    6500 scope.go:117] "RemoveContainer" containerID="5727ceab99441575b663c2b6017dd7dc3f45809395c77d0612df121f6a9b8c42"
	Sep 16 17:22:34 functional-510000 kubelet[6500]: E0916 17:22:34.030764    6500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-h9zb4_default(24091e34-07ae-4857-8c51-9e5f39d97e78)\"" pod="default/hello-node-connect-65d86f57f4-h9zb4" podUID="24091e34-07ae-4857-8c51-9e5f39d97e78"
	Sep 16 17:22:37 functional-510000 kubelet[6500]: I0916 17:22:37.161226    6500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-qmd8v" podStartSLOduration=1.214199508 podStartE2EDuration="6.16121411s" podCreationTimestamp="2024-09-16 17:22:31 +0000 UTC" firstStartedPulling="2024-09-16 17:22:31.745048256 +0000 UTC m=+71.784501176" lastFinishedPulling="2024-09-16 17:22:36.692062816 +0000 UTC m=+76.731515778" observedRunningTime="2024-09-16 17:22:37.160968261 +0000 UTC m=+77.200421223" watchObservedRunningTime="2024-09-16 17:22:37.16121411 +0000 UTC m=+77.200667030"
	Sep 16 17:22:39 functional-510000 kubelet[6500]: I0916 17:22:39.197299    6500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-mkl7m" podStartSLOduration=1.5330951449999999 podStartE2EDuration="8.197280229s" podCreationTimestamp="2024-09-16 17:22:31 +0000 UTC" firstStartedPulling="2024-09-16 17:22:32.085799801 +0000 UTC m=+72.125252763" lastFinishedPulling="2024-09-16 17:22:38.749984927 +0000 UTC m=+78.789437847" observedRunningTime="2024-09-16 17:22:39.196948429 +0000 UTC m=+79.236401391" watchObservedRunningTime="2024-09-16 17:22:39.197280229 +0000 UTC m=+79.236733190"
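
Note: the kubelet is cycling the "echoserver-arm" containers behind hello-node and hello-node-connect through a 20s CrashLoopBackOff, which is what ServiceCmdConnect ultimately trips over. A hypothetical triage step, since CrashLoopBackOff itself only records the restarts, not the cause:

	kubectl --context functional-510000 logs pod/hello-node-connect-65d86f57f4-h9zb4 --previous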
	
	
	==> kubernetes-dashboard [0f898a21a223] <==
	2024/09/16 17:22:37 Using namespace: kubernetes-dashboard
	2024/09/16 17:22:37 Using in-cluster config to connect to apiserver
	2024/09/16 17:22:37 Using secret token for csrf signing
	2024/09/16 17:22:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 17:22:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 17:22:37 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 17:22:37 Generating JWE encryption key
	2024/09/16 17:22:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 17:22:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 17:22:37 Initializing JWE encryption key from synchronized object
	2024/09/16 17:22:37 Creating in-cluster Sidecar client
	2024/09/16 17:22:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 17:22:37 Serving insecurely on HTTP port: 9090
	2024/09/16 17:22:37 Starting overwatch
	
	
	==> storage-provisioner [4244a4a987ce] <==
	I0916 17:20:40.026328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 17:20:40.053079       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 17:20:40.053204       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 17:20:57.467689       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 17:20:57.467766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-510000_4aa420df-a5c0-4d89-b63e-cc11aad4b838!
	I0916 17:20:57.467893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"104ebea8-b608-469c-88c8-489e4d09e56d", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-510000_4aa420df-a5c0-4d89-b63e-cc11aad4b838 became leader
	I0916 17:20:57.568699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-510000_4aa420df-a5c0-4d89-b63e-cc11aad4b838!
	
	
	==> storage-provisioner [b84a1b6e5d12] <==
	I0916 17:21:23.528057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 17:21:23.549008       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 17:21:23.577446       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 17:21:40.997111       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 17:21:40.998903       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-510000_6be6260d-69fb-4c76-9a16-d4151089738c!
	I0916 17:21:41.000436       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"104ebea8-b608-469c-88c8-489e4d09e56d", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-510000_6be6260d-69fb-4c76-9a16-d4151089738c became leader
	I0916 17:21:41.100280       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-510000_6be6260d-69fb-4c76-9a16-d4151089738c!
	I0916 17:21:55.087468       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0916 17:21:55.087709       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ccf7dc08-2204-4cf4-b482-d42f09558f94", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0916 17:21:55.087570       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    a3dd13f9-4186-4879-9747-f4bda7a817d7 343 0 2024-09-16 17:20:10 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-16 17:20:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-ccf7dc08-2204-4cf4-b482-d42f09558f94 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  ccf7dc08-2204-4cf4-b482-d42f09558f94 658 0 2024-09-16 17:21:55 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-16 17:21:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-16 17:21:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0916 17:21:55.088153       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-ccf7dc08-2204-4cf4-b482-d42f09558f94" provisioned
	I0916 17:21:55.088222       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0916 17:21:55.088269       1 volume_store.go:212] Trying to save persistentvolume "pvc-ccf7dc08-2204-4cf4-b482-d42f09558f94"
	I0916 17:21:55.093385       1 volume_store.go:219] persistentvolume "pvc-ccf7dc08-2204-4cf4-b482-d42f09558f94" saved
	I0916 17:21:55.093661       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ccf7dc08-2204-4cf4-b482-d42f09558f94", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ccf7dc08-2204-4cf4-b482-d42f09558f94
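
Note: the sequence above (provision started -> volume provisioned -> succeeded -> persistentvolume saved) is the healthy path for the hostpath provisioner, and the claim it served can be read straight out of the logged object. A reconstruction of that claim for illustration, applied the way the tests drive kubectl:

	kubectl --context functional-510000 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF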
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-510000 -n functional-510000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-510000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-510000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-510000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-510000/192.168.105.4
	Start Time:       Mon, 16 Sep 2024 10:22:23 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://51cf086e5f0570d3184e2008160070c191980a78ca41a1d8fbb6de284f04f09b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 16 Sep 2024 10:22:24 -0700
	      Finished:     Mon, 16 Sep 2024 10:22:24 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ddfv4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ddfv4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  20s   default-scheduler  Successfully assigned default/busybox-mount to functional-510000
	  Normal  Pulling    20s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     19s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.326s (1.326s including waiting). Image size: 3547125 bytes.
	  Normal  Created    19s   kubelet            Created container mount-munger
	  Normal  Started    19s   kubelet            Started container mount-munger

-- /stdout --
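
Note: busybox-mount reports Status: Succeeded with its one container Completed (exit code 0), so it shows up under "non-running pods" only because the post-mortem's field selector is status.phase!=Running; the pod itself did its job. A hypothetical variant that also filters completed pods:

	kubectl --context functional-510000 get po -A --field-selector=status.phase!=Running,status.phase!=Succeeded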
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (41.60s)

TestMultiControlPlane/serial/StopSecondaryNode (115.95s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 node stop m02 -v=7 --alsologtostderr
E0916 10:27:10.400187    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-094000 node stop m02 -v=7 --alsologtostderr: (12.181892s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr
E0916 10:27:30.882564    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:27:31.841280    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:28:11.845092    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr: exit status 7 (1m17.807716667s)

-- stdout --
	ha-094000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-094000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-094000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-094000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0916 10:27:19.945870    2665 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:27:19.946062    2665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:27:19.946065    2665 out.go:358] Setting ErrFile to fd 2...
	I0916 10:27:19.946067    2665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:27:19.946222    2665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:27:19.946341    2665 out.go:352] Setting JSON to false
	I0916 10:27:19.946351    2665 mustload.go:65] Loading cluster: ha-094000
	I0916 10:27:19.946418    2665 notify.go:220] Checking for updates...
	I0916 10:27:19.946600    2665 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:27:19.946606    2665 status.go:255] checking status of ha-094000 ...
	I0916 10:27:19.947286    2665 status.go:330] ha-094000 host status = "Running" (err=<nil>)
	I0916 10:27:19.947292    2665 host.go:66] Checking if "ha-094000" exists ...
	I0916 10:27:19.947385    2665 host.go:66] Checking if "ha-094000" exists ...
	I0916 10:27:19.947495    2665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:27:19.947502    2665 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/id_rsa Username:docker}
	W0916 10:27:45.872251    2665 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0916 10:27:45.872383    2665 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 10:27:45.872416    2665 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0916 10:27:45.872426    2665 status.go:257] ha-094000 status: &{Name:ha-094000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:27:45.872445    2665 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0916 10:27:45.872454    2665 status.go:255] checking status of ha-094000-m02 ...
	I0916 10:27:45.872892    2665 status.go:330] ha-094000-m02 host status = "Stopped" (err=<nil>)
	I0916 10:27:45.872902    2665 status.go:343] host is not running, skipping remaining checks
	I0916 10:27:45.872908    2665 status.go:257] ha-094000-m02 status: &{Name:ha-094000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:27:45.872919    2665 status.go:255] checking status of ha-094000-m03 ...
	I0916 10:27:45.874122    2665 status.go:330] ha-094000-m03 host status = "Running" (err=<nil>)
	I0916 10:27:45.874133    2665 host.go:66] Checking if "ha-094000-m03" exists ...
	I0916 10:27:45.874366    2665 host.go:66] Checking if "ha-094000-m03" exists ...
	I0916 10:27:45.874634    2665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:27:45.874647    2665 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m03/id_rsa Username:docker}
	W0916 10:28:11.797562    2665 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0916 10:28:11.797608    2665 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0916 10:28:11.797617    2665 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0916 10:28:11.797622    2665 status.go:257] ha-094000-m03 status: &{Name:ha-094000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:28:11.797632    2665 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0916 10:28:11.797636    2665 status.go:255] checking status of ha-094000-m04 ...
	I0916 10:28:11.798371    2665 status.go:330] ha-094000-m04 host status = "Running" (err=<nil>)
	I0916 10:28:11.798380    2665 host.go:66] Checking if "ha-094000-m04" exists ...
	I0916 10:28:11.798483    2665 host.go:66] Checking if "ha-094000-m04" exists ...
	I0916 10:28:11.798608    2665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:28:11.798614    2665 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m04/id_rsa Username:docker}
	W0916 10:28:37.719905    2665 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0916 10:28:37.719952    2665 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0916 10:28:37.719962    2665 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0916 10:28:37.719966    2665 status.go:257] ha-094000-m04 status: &{Name:ha-094000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:28:37.719974    2665 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
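
Note: in the stderr above, every "host: Error" line corresponds to one SSH dial timing out after roughly 26s (against 192.168.105.5, .7, and .8 in turn), which also accounts for the 1m17.8s runtime of the status command. A hypothetical manual probe of the same path, reusing the key, user, and command from the log:

	ssh -i /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/id_rsa \
	    -o ConnectTimeout=10 docker@192.168.105.5 "df -h /var"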
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr": ha-094000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-094000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-094000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr": ha-094000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-094000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-094000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr": ha-094000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-094000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-094000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 3 (25.959276708s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0916 10:29:03.679007    2678 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 10:29:03.679015    2678 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (115.95s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (77.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0916 10:29:33.766667    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (51.895778458s)
ha_test.go:413: expected profile "ha-094000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-094000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-094000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-094000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
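
Note: the assertion expects "Degraded" (some, but not all, control planes healthy) yet sees "Stopped" because, following the previous test, none of the hosts answer over SSH, so profile list finds no running node at all. A hypothetical way to pull just the asserted field out of that JSON, assuming jq is available:

	out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name)\t\(.Status)"'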
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 3 (25.960378375s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0916 10:30:21.533623    2694 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 10:30:21.533634    2694 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (77.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (110.66s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-094000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.087280041s)

-- stdout --
	* Starting "ha-094000-m02" control-plane node in "ha-094000" cluster
	* Restarting existing qemu2 VM for "ha-094000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-094000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
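
Note: both restart attempts in the stdout above fail on the host side, before the VM ever boots: /var/run/socket_vmnet is the UNIX socket of the socket_vmnet daemon that the qemu2 driver uses for networking, and "Connection refused" means nothing is listening there. Hypothetical host-side checks, assuming the Homebrew-service installation that the qemu2 driver docs describe:

	ls -l /var/run/socket_vmnet
	sudo brew services start socket_vmnet   # one common way to (re)start the daemon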
** stderr ** 
	I0916 10:30:21.567201    3020 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:30:21.567441    3020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:30:21.567445    3020 out.go:358] Setting ErrFile to fd 2...
	I0916 10:30:21.567447    3020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:30:21.567597    3020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:30:21.567857    3020 mustload.go:65] Loading cluster: ha-094000
	I0916 10:30:21.568099    3020 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0916 10:30:21.568310    3020 host.go:58] "ha-094000-m02" host status: Stopped
	I0916 10:30:21.572931    3020 out.go:177] * Starting "ha-094000-m02" control-plane node in "ha-094000" cluster
	I0916 10:30:21.576876    3020 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:30:21.576891    3020 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:30:21.576902    3020 cache.go:56] Caching tarball of preloaded images
	I0916 10:30:21.576978    3020 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:30:21.576985    3020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:30:21.577042    3020 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/ha-094000/config.json ...
	I0916 10:30:21.577373    3020 start.go:360] acquireMachinesLock for ha-094000-m02: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:30:21.577431    3020 start.go:364] duration metric: took 26.916µs to acquireMachinesLock for "ha-094000-m02"
	I0916 10:30:21.577439    3020 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:30:21.577444    3020 fix.go:54] fixHost starting: m02
	I0916 10:30:21.577549    3020 fix.go:112] recreateIfNeeded on ha-094000-m02: state=Stopped err=<nil>
	W0916 10:30:21.577554    3020 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:30:21.581912    3020 out.go:177] * Restarting existing qemu2 VM for "ha-094000-m02" ...
	I0916 10:30:21.585842    3020 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:30:21.585893    3020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:5d:13:bd:be:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/disk.qcow2
	I0916 10:30:21.588221    3020 main.go:141] libmachine: STDOUT: 
	I0916 10:30:21.588238    3020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:30:21.588259    3020 fix.go:56] duration metric: took 10.815209ms for fixHost
	I0916 10:30:21.588267    3020 start.go:83] releasing machines lock for "ha-094000-m02", held for 10.826625ms
	W0916 10:30:21.588273    3020 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:30:21.588306    3020 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:30:21.588310    3020 start.go:729] Will try again in 5 seconds ...
	I0916 10:30:26.590228    3020 start.go:360] acquireMachinesLock for ha-094000-m02: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:30:26.590346    3020 start.go:364] duration metric: took 103.583µs to acquireMachinesLock for "ha-094000-m02"
	I0916 10:30:26.590379    3020 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:30:26.590383    3020 fix.go:54] fixHost starting: m02
	I0916 10:30:26.590536    3020 fix.go:112] recreateIfNeeded on ha-094000-m02: state=Stopped err=<nil>
	W0916 10:30:26.590545    3020 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:30:26.594323    3020 out.go:177] * Restarting existing qemu2 VM for "ha-094000-m02" ...
	I0916 10:30:26.598325    3020 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:30:26.598377    3020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:5d:13:bd:be:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/disk.qcow2
	I0916 10:30:26.600663    3020 main.go:141] libmachine: STDOUT: 
	I0916 10:30:26.600679    3020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:30:26.600698    3020 fix.go:56] duration metric: took 10.314834ms for fixHost
	I0916 10:30:26.600701    3020 start.go:83] releasing machines lock for "ha-094000-m02", held for 10.347167ms
	W0916 10:30:26.600759    3020 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-094000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-094000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:30:26.607310    3020 out.go:201] 
	W0916 10:30:26.611381    3020 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:30:26.611386    3020 out.go:270] * 
	* 
	W0916 10:30:26.613055    3020 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:30:26.617247    3020 out.go:201] 

** /stderr **
ha_test.go:422: I0916 10:30:21.567201    3020 out.go:345] Setting OutFile to fd 1 ...
I0916 10:30:21.567441    3020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:30:21.567445    3020 out.go:358] Setting ErrFile to fd 2...
I0916 10:30:21.567447    3020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:30:21.567597    3020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
I0916 10:30:21.567857    3020 mustload.go:65] Loading cluster: ha-094000
I0916 10:30:21.568099    3020 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0916 10:30:21.568310    3020 host.go:58] "ha-094000-m02" host status: Stopped
I0916 10:30:21.572931    3020 out.go:177] * Starting "ha-094000-m02" control-plane node in "ha-094000" cluster
I0916 10:30:21.576876    3020 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0916 10:30:21.576891    3020 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0916 10:30:21.576902    3020 cache.go:56] Caching tarball of preloaded images
I0916 10:30:21.576978    3020 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0916 10:30:21.576985    3020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0916 10:30:21.577042    3020 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/ha-094000/config.json ...
I0916 10:30:21.577373    3020 start.go:360] acquireMachinesLock for ha-094000-m02: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0916 10:30:21.577431    3020 start.go:364] duration metric: took 26.916µs to acquireMachinesLock for "ha-094000-m02"
I0916 10:30:21.577439    3020 start.go:96] Skipping create...Using existing machine configuration
I0916 10:30:21.577444    3020 fix.go:54] fixHost starting: m02
I0916 10:30:21.577549    3020 fix.go:112] recreateIfNeeded on ha-094000-m02: state=Stopped err=<nil>
W0916 10:30:21.577554    3020 fix.go:138] unexpected machine state, will restart: <nil>
I0916 10:30:21.581912    3020 out.go:177] * Restarting existing qemu2 VM for "ha-094000-m02" ...
I0916 10:30:21.585842    3020 qemu.go:418] Using hvf for hardware acceleration
I0916 10:30:21.585893    3020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:5d:13:bd:be:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/disk.qcow2
I0916 10:30:21.588221    3020 main.go:141] libmachine: STDOUT: 
I0916 10:30:21.588238    3020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0916 10:30:21.588259    3020 fix.go:56] duration metric: took 10.815209ms for fixHost
I0916 10:30:21.588267    3020 start.go:83] releasing machines lock for "ha-094000-m02", held for 10.826625ms
W0916 10:30:21.588273    3020 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0916 10:30:21.588306    3020 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0916 10:30:21.588310    3020 start.go:729] Will try again in 5 seconds ...
I0916 10:30:26.590228    3020 start.go:360] acquireMachinesLock for ha-094000-m02: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0916 10:30:26.590346    3020 start.go:364] duration metric: took 103.583µs to acquireMachinesLock for "ha-094000-m02"
I0916 10:30:26.590379    3020 start.go:96] Skipping create...Using existing machine configuration
I0916 10:30:26.590383    3020 fix.go:54] fixHost starting: m02
I0916 10:30:26.590536    3020 fix.go:112] recreateIfNeeded on ha-094000-m02: state=Stopped err=<nil>
W0916 10:30:26.590545    3020 fix.go:138] unexpected machine state, will restart: <nil>
I0916 10:30:26.594323    3020 out.go:177] * Restarting existing qemu2 VM for "ha-094000-m02" ...
I0916 10:30:26.598325    3020 qemu.go:418] Using hvf for hardware acceleration
I0916 10:30:26.598377    3020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:5d:13:bd:be:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m02/disk.qcow2
I0916 10:30:26.600663    3020 main.go:141] libmachine: STDOUT: 
I0916 10:30:26.600679    3020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0916 10:30:26.600698    3020 fix.go:56] duration metric: took 10.314834ms for fixHost
I0916 10:30:26.600701    3020 start.go:83] releasing machines lock for "ha-094000-m02", held for 10.347167ms
W0916 10:30:26.600759    3020 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-094000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-094000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0916 10:30:26.607310    3020 out.go:201] 
W0916 10:30:26.611381    3020 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0916 10:30:26.611386    3020 out.go:270] * 
* 
W0916 10:30:26.613055    3020 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0916 10:30:26.617247    3020 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-094000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr: exit status 7 (1m19.619437333s)

-- stdout --
	ha-094000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-094000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-094000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-094000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0916 10:30:26.654644    3024 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:30:26.654826    3024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:30:26.654833    3024 out.go:358] Setting ErrFile to fd 2...
	I0916 10:30:26.654835    3024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:30:26.654996    3024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:30:26.655123    3024 out.go:352] Setting JSON to false
	I0916 10:30:26.655138    3024 mustload.go:65] Loading cluster: ha-094000
	I0916 10:30:26.655171    3024 notify.go:220] Checking for updates...
	I0916 10:30:26.655430    3024 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:30:26.655437    3024 status.go:255] checking status of ha-094000 ...
	I0916 10:30:26.656223    3024 status.go:330] ha-094000 host status = "Running" (err=<nil>)
	I0916 10:30:26.656229    3024 host.go:66] Checking if "ha-094000" exists ...
	I0916 10:30:26.656317    3024 host.go:66] Checking if "ha-094000" exists ...
	I0916 10:30:26.656434    3024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:30:26.656444    3024 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/id_rsa Username:docker}
	W0916 10:30:26.656626    3024 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0916 10:30:26.656642    3024 retry.go:31] will retry after 351.467188ms: dial tcp 192.168.105.5:22: connect: host is down
	W0916 10:30:27.010474    3024 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0916 10:30:27.010496    3024 retry.go:31] will retry after 450.001422ms: dial tcp 192.168.105.5:22: connect: host is down
	W0916 10:30:27.462664    3024 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0916 10:30:27.462687    3024 retry.go:31] will retry after 672.479191ms: dial tcp 192.168.105.5:22: connect: host is down
	W0916 10:30:28.137318    3024 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0916 10:30:28.137374    3024 retry.go:31] will retry after 331.940659ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0916 10:30:28.471385    3024 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/id_rsa Username:docker}
	W0916 10:30:54.395388    3024 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0916 10:30:54.395469    3024 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 10:30:54.395478    3024 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0916 10:30:54.395483    3024 status.go:257] ha-094000 status: &{Name:ha-094000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:30:54.395496    3024 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0916 10:30:54.395500    3024 status.go:255] checking status of ha-094000-m02 ...
	I0916 10:30:54.395731    3024 status.go:330] ha-094000-m02 host status = "Stopped" (err=<nil>)
	I0916 10:30:54.395738    3024 status.go:343] host is not running, skipping remaining checks
	I0916 10:30:54.395740    3024 status.go:257] ha-094000-m02 status: &{Name:ha-094000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:30:54.395745    3024 status.go:255] checking status of ha-094000-m03 ...
	I0916 10:30:54.396505    3024 status.go:330] ha-094000-m03 host status = "Running" (err=<nil>)
	I0916 10:30:54.396514    3024 host.go:66] Checking if "ha-094000-m03" exists ...
	I0916 10:30:54.396631    3024 host.go:66] Checking if "ha-094000-m03" exists ...
	I0916 10:30:54.396764    3024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:30:54.396773    3024 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m03/id_rsa Username:docker}
	W0916 10:31:20.315364    3024 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0916 10:31:20.315407    3024 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0916 10:31:20.315415    3024 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0916 10:31:20.315418    3024 status.go:257] ha-094000-m03 status: &{Name:ha-094000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:31:20.315440    3024 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0916 10:31:20.315445    3024 status.go:255] checking status of ha-094000-m04 ...
	I0916 10:31:20.316122    3024 status.go:330] ha-094000-m04 host status = "Running" (err=<nil>)
	I0916 10:31:20.316128    3024 host.go:66] Checking if "ha-094000-m04" exists ...
	I0916 10:31:20.316226    3024 host.go:66] Checking if "ha-094000-m04" exists ...
	I0916 10:31:20.316350    3024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:31:20.316356    3024 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000-m04/id_rsa Username:docker}
	W0916 10:31:46.238665    3024 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0916 10:31:46.238710    3024 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0916 10:31:46.238717    3024 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0916 10:31:46.238720    3024 status.go:257] ha-094000-m04 status: &{Name:ha-094000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0916 10:31:46.238729    3024 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr" : exit status 7
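Note: the status run above shows the failure signature for this whole section: ha-094000-m02 reports Stopped immediately, while the nodes whose QEMU processes never came up on the vmnet socket (ha-094000, ha-094000-m03, ha-094000-m04) block until each SSH dial times out (dial tcp 192.168.105.x:22) at roughly 26 seconds per node, which is what stretches a simple status call to 1m19s. As a sketch, SSH reachability can be probed directly with the BSD netcat shipped on macOS (the -G connect-timeout flag is macOS-specific; the IPs are the node IPs from the log):

    for ip in 192.168.105.5 192.168.105.7 192.168.105.8; do
      # -z: scan only, no data; -G 5: give up on the TCP connect after 5s
      nc -z -G 5 "$ip" 22 && echo "$ip: ssh reachable" || echo "$ip: unreachable"
    done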
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
E0916 10:31:49.884641    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:32:04.107703    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 3 (25.954397417s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0916 10:32:12.192742    3059 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0916 10:32:12.192756    3059 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (110.66s)
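Note: every qemu2 VM start in this test (and in the failures that follow) dies at the same point: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon ('Failed to connect to "/var/run/socket_vmnet": Connection refused'), so QEMU is never launched and STDOUT stays empty. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver documentation (the paths match the SocketVMnetClientPath/SocketVMnetPath values logged elsewhere in this report):

    # Does the daemon's UNIX socket exist, and does it accept a connection?
    ls -l /var/run/socket_vmnet
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo connected
    # Restart the Homebrew-managed daemon (root is required for vmnet access)
    HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet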

TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.27s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-094000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-094000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-094000 -v=7 --alsologtostderr: (2m10.871762708s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-094000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-094000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.224489834s)

-- stdout --
	* [ha-094000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-094000" primary control-plane node in "ha-094000" cluster
	* Restarting existing qemu2 VM for "ha-094000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-094000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:34:52.635814    3145 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:34:52.636037    3145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:52.636041    3145 out.go:358] Setting ErrFile to fd 2...
	I0916 10:34:52.636044    3145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:52.636200    3145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:34:52.637460    3145 out.go:352] Setting JSON to false
	I0916 10:34:52.657043    3145 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2056,"bootTime":1726506036,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:34:52.657117    3145 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:34:52.662268    3145 out.go:177] * [ha-094000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:34:52.670252    3145 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:34:52.670312    3145 notify.go:220] Checking for updates...
	I0916 10:34:52.676122    3145 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:34:52.679165    3145 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:34:52.682240    3145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:34:52.683622    3145 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:34:52.686232    3145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:34:52.689562    3145 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:34:52.689612    3145 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:34:52.694070    3145 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:34:52.701209    3145 start.go:297] selected driver: qemu2
	I0916 10:34:52.701217    3145 start.go:901] validating driver "qemu2" against &{Name:ha-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:52.701311    3145 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:34:52.703846    3145 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:34:52.703872    3145 cni.go:84] Creating CNI manager for ""
	I0916 10:34:52.703902    3145 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:34:52.703958    3145 start.go:340] cluster config:
	{Name:ha-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:34:52.707825    3145 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:34:52.716198    3145 out.go:177] * Starting "ha-094000" primary control-plane node in "ha-094000" cluster
	I0916 10:34:52.720232    3145 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:34:52.720253    3145 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:34:52.720265    3145 cache.go:56] Caching tarball of preloaded images
	I0916 10:34:52.720340    3145 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:34:52.720350    3145 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:34:52.720418    3145 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/ha-094000/config.json ...
	I0916 10:34:52.720876    3145 start.go:360] acquireMachinesLock for ha-094000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:34:52.720917    3145 start.go:364] duration metric: took 33.916µs to acquireMachinesLock for "ha-094000"
	I0916 10:34:52.720925    3145 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:34:52.720930    3145 fix.go:54] fixHost starting: 
	I0916 10:34:52.721055    3145 fix.go:112] recreateIfNeeded on ha-094000: state=Stopped err=<nil>
	W0916 10:34:52.721064    3145 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:34:52.725209    3145 out.go:177] * Restarting existing qemu2 VM for "ha-094000" ...
	I0916 10:34:52.733179    3145 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:34:52.733217    3145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:13:f3:5d:bf:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/disk.qcow2
	I0916 10:34:52.735311    3145 main.go:141] libmachine: STDOUT: 
	I0916 10:34:52.735328    3145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:34:52.735366    3145 fix.go:56] duration metric: took 14.4355ms for fixHost
	I0916 10:34:52.735372    3145 start.go:83] releasing machines lock for "ha-094000", held for 14.451375ms
	W0916 10:34:52.735378    3145 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:34:52.735410    3145 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:34:52.735415    3145 start.go:729] Will try again in 5 seconds ...
	I0916 10:34:57.737573    3145 start.go:360] acquireMachinesLock for ha-094000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:34:57.738054    3145 start.go:364] duration metric: took 348.458µs to acquireMachinesLock for "ha-094000"
	I0916 10:34:57.738190    3145 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:34:57.738212    3145 fix.go:54] fixHost starting: 
	I0916 10:34:57.738969    3145 fix.go:112] recreateIfNeeded on ha-094000: state=Stopped err=<nil>
	W0916 10:34:57.738995    3145 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:34:57.743609    3145 out.go:177] * Restarting existing qemu2 VM for "ha-094000" ...
	I0916 10:34:57.751482    3145 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:34:57.751785    3145 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:13:f3:5d:bf:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/disk.qcow2
	I0916 10:34:57.761460    3145 main.go:141] libmachine: STDOUT: 
	I0916 10:34:57.761515    3145 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:34:57.761602    3145 fix.go:56] duration metric: took 23.39175ms for fixHost
	I0916 10:34:57.761623    3145 start.go:83] releasing machines lock for "ha-094000", held for 23.545583ms
	W0916 10:34:57.761801    3145 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-094000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-094000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:34:57.769425    3145 out.go:201] 
	W0916 10:34:57.773570    3145 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:34:57.773607    3145 out.go:270] * 
	* 
	W0916 10:34:57.776371    3145 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:34:57.782508    3145 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-094000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-094000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 7 (33.001334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.27s)
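Note: minikube's own advice in the log above (`minikube delete -p ha-094000`) only helps once the socket_vmnet daemon is reachable again; with the daemon down, a recreated profile fails identically. A hypothetical recovery sequence after the daemon is restored, with the driver and network values taken from the profile config logged above:

    # Recreate the profile from scratch once /var/run/socket_vmnet accepts connections
    out/minikube-darwin-arm64 delete -p ha-094000
    out/minikube-darwin-arm64 start -p ha-094000 --driver=qemu2 --network=socket_vmnet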

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-094000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.452583ms)

-- stdout --
	* The control-plane node ha-094000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-094000"

-- /stdout --
** stderr ** 
	I0916 10:34:57.926557    3160 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:34:57.926799    3160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:57.926802    3160 out.go:358] Setting ErrFile to fd 2...
	I0916 10:34:57.926805    3160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:57.926936    3160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:34:57.927162    3160 mustload.go:65] Loading cluster: ha-094000
	I0916 10:34:57.927432    3160 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0916 10:34:57.927734    3160 out.go:270] ! The control-plane node ha-094000 host is not running (will try others): state=Stopped
	! The control-plane node ha-094000 host is not running (will try others): state=Stopped
	W0916 10:34:57.927854    3160 out.go:270] ! The control-plane node ha-094000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-094000-m02 host is not running (will try others): state=Stopped
	I0916 10:34:57.931941    3160 out.go:177] * The control-plane node ha-094000-m03 host is not running: state=Stopped
	I0916 10:34:57.934901    3160 out.go:177]   To start a cluster, run: "minikube start -p ha-094000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-094000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr: exit status 7 (30.115542ms)

-- stdout --
	ha-094000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-094000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-094000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-094000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 10:34:57.966903    3162 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:34:57.967065    3162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:57.967069    3162 out.go:358] Setting ErrFile to fd 2...
	I0916 10:34:57.967071    3162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:34:57.967187    3162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:34:57.967300    3162 out.go:352] Setting JSON to false
	I0916 10:34:57.967309    3162 mustload.go:65] Loading cluster: ha-094000
	I0916 10:34:57.967361    3162 notify.go:220] Checking for updates...
	I0916 10:34:57.967538    3162 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:34:57.967544    3162 status.go:255] checking status of ha-094000 ...
	I0916 10:34:57.967787    3162 status.go:330] ha-094000 host status = "Stopped" (err=<nil>)
	I0916 10:34:57.967791    3162 status.go:343] host is not running, skipping remaining checks
	I0916 10:34:57.967793    3162 status.go:257] ha-094000 status: &{Name:ha-094000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:34:57.967803    3162 status.go:255] checking status of ha-094000-m02 ...
	I0916 10:34:57.967893    3162 status.go:330] ha-094000-m02 host status = "Stopped" (err=<nil>)
	I0916 10:34:57.967896    3162 status.go:343] host is not running, skipping remaining checks
	I0916 10:34:57.967897    3162 status.go:257] ha-094000-m02 status: &{Name:ha-094000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:34:57.967901    3162 status.go:255] checking status of ha-094000-m03 ...
	I0916 10:34:57.967984    3162 status.go:330] ha-094000-m03 host status = "Stopped" (err=<nil>)
	I0916 10:34:57.967986    3162 status.go:343] host is not running, skipping remaining checks
	I0916 10:34:57.967987    3162 status.go:257] ha-094000-m03 status: &{Name:ha-094000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:34:57.967991    3162 status.go:255] checking status of ha-094000-m04 ...
	I0916 10:34:57.968085    3162 status.go:330] ha-094000-m04 host status = "Stopped" (err=<nil>)
	I0916 10:34:57.968088    3162 status.go:343] host is not running, skipping remaining checks
	I0916 10:34:57.968090    3162 status.go:257] ha-094000-m04 status: &{Name:ha-094000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 7 (29.74875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
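Note: with every host Stopped, `node delete` walks the control-plane nodes, finds none running (the "host is not running" warnings above), and gives up with exit status 83 before attempting the delete. The per-node state can be enumerated with the same --format and -n flags the test harness itself uses; a sketch:

    for n in ha-094000 ha-094000-m02 ha-094000-m03 ha-094000-m04; do
      printf '%s: ' "$n"
      out/minikube-darwin-arm64 status --format='{{.Host}}' -p ha-094000 -n "$n"
    done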

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-094000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-094000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-094000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-094000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 7 (30.298625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
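
Note: the Degraded checks in this group parse `out/minikube-darwin-arm64 profile list --output json`, whose shape is visible in the failure message above. A minimal Go sketch of that parsing step, with the struct trimmed to the two fields the assertion reads (the wrapper types here are illustrative, not ha_test.go's own):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // profileList mirrors only the fields of `profile list --output json`
    // that the check relies on; the shape is taken from the failure message.
    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Status string `json:"Status"`
    	} `json:"valid"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64",
    		"profile", "list", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("profile list failed:", err)
    		return
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		fmt.Println("unmarshal:", err)
    		return
    	}
    	for _, p := range pl.Valid {
    		// the test expects "Degraded" here; this run reports "Stopped"
    		fmt.Printf("%s: %s\n", p.Name, p.Status)
    	}
    }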

TestMultiControlPlane/serial/StopCluster (103.93s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-094000 stop -v=7 --alsologtostderr: (1m43.827574084s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr: exit status 7 (66.268875ms)

-- stdout --
	ha-094000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-094000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-094000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-094000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 10:36:41.965179    3186 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:41.965394    3186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:41.965402    3186 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:41.965405    3186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:41.965594    3186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:36:41.965757    3186 out.go:352] Setting JSON to false
	I0916 10:36:41.965769    3186 mustload.go:65] Loading cluster: ha-094000
	I0916 10:36:41.965820    3186 notify.go:220] Checking for updates...
	I0916 10:36:41.966116    3186 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:36:41.966124    3186 status.go:255] checking status of ha-094000 ...
	I0916 10:36:41.966477    3186 status.go:330] ha-094000 host status = "Stopped" (err=<nil>)
	I0916 10:36:41.966482    3186 status.go:343] host is not running, skipping remaining checks
	I0916 10:36:41.966485    3186 status.go:257] ha-094000 status: &{Name:ha-094000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:36:41.966497    3186 status.go:255] checking status of ha-094000-m02 ...
	I0916 10:36:41.966624    3186 status.go:330] ha-094000-m02 host status = "Stopped" (err=<nil>)
	I0916 10:36:41.966628    3186 status.go:343] host is not running, skipping remaining checks
	I0916 10:36:41.966631    3186 status.go:257] ha-094000-m02 status: &{Name:ha-094000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:36:41.966636    3186 status.go:255] checking status of ha-094000-m03 ...
	I0916 10:36:41.966767    3186 status.go:330] ha-094000-m03 host status = "Stopped" (err=<nil>)
	I0916 10:36:41.966771    3186 status.go:343] host is not running, skipping remaining checks
	I0916 10:36:41.966773    3186 status.go:257] ha-094000-m03 status: &{Name:ha-094000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:36:41.966778    3186 status.go:255] checking status of ha-094000-m04 ...
	I0916 10:36:41.966901    3186 status.go:330] ha-094000-m04 host status = "Stopped" (err=<nil>)
	I0916 10:36:41.966905    3186 status.go:343] host is not running, skipping remaining checks
	I0916 10:36:41.966907    3186 status.go:257] ha-094000-m04 status: &{Name:ha-094000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr": ha-094000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr": ha-094000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr": ha-094000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-094000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 7 (31.776458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (103.93s)
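
Note: ha_test.go:543/549/552 assert against counts of node roles and component states in the plain-text status output. A rough Go sketch of that kind of counting, assuming simple substring counts over the output shown above (the real test may tokenize differently):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Abbreviated paste of the `minikube status` output above;
    	// the test reads it from the command instead.
    	status := `ha-094000
    type: Control Plane
    host: Stopped
    kubelet: Stopped
    apiserver: Stopped

    ha-094000-m02
    type: Control Plane
    host: Stopped
    kubelet: Stopped
    apiserver: Stopped

    ha-094000-m04
    type: Worker
    host: Stopped
    kubelet: Stopped`

    	fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))
    	fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))
    	fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped"))
    }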

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-094000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-094000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182639916s)

-- stdout --
	* [ha-094000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-094000" primary control-plane node in "ha-094000" cluster
	* Restarting existing qemu2 VM for "ha-094000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-094000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:36:42.027844    3190 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:42.027963    3190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:42.027967    3190 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:42.027969    3190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:42.028101    3190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:36:42.029136    3190 out.go:352] Setting JSON to false
	I0916 10:36:42.045167    3190 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2166,"bootTime":1726506036,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:36:42.045232    3190 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:36:42.050003    3190 out.go:177] * [ha-094000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:36:42.056876    3190 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:36:42.056926    3190 notify.go:220] Checking for updates...
	I0916 10:36:42.063913    3190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:36:42.066830    3190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:36:42.069902    3190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:36:42.072900    3190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:36:42.075878    3190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:36:42.079118    3190 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:36:42.079375    3190 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:36:42.083932    3190 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:36:42.090839    3190 start.go:297] selected driver: qemu2
	I0916 10:36:42.090845    3190 start.go:901] validating driver "qemu2" against &{Name:ha-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-094000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:36:42.090923    3190 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:36:42.093084    3190 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:36:42.093107    3190 cni.go:84] Creating CNI manager for ""
	I0916 10:36:42.093129    3190 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:36:42.093169    3190 start.go:340] cluster config:
	{Name:ha-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-094000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:36:42.096546    3190 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:36:42.104860    3190 out.go:177] * Starting "ha-094000" primary control-plane node in "ha-094000" cluster
	I0916 10:36:42.108904    3190 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:36:42.108919    3190 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:36:42.108931    3190 cache.go:56] Caching tarball of preloaded images
	I0916 10:36:42.109002    3190 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:36:42.109008    3190 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:36:42.109082    3190 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/ha-094000/config.json ...
	I0916 10:36:42.109522    3190 start.go:360] acquireMachinesLock for ha-094000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:36:42.109555    3190 start.go:364] duration metric: took 27.209µs to acquireMachinesLock for "ha-094000"
	I0916 10:36:42.109563    3190 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:36:42.109569    3190 fix.go:54] fixHost starting: 
	I0916 10:36:42.109685    3190 fix.go:112] recreateIfNeeded on ha-094000: state=Stopped err=<nil>
	W0916 10:36:42.109693    3190 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:36:42.112939    3190 out.go:177] * Restarting existing qemu2 VM for "ha-094000" ...
	I0916 10:36:42.120712    3190 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:36:42.120750    3190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:13:f3:5d:bf:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/disk.qcow2
	I0916 10:36:42.122687    3190 main.go:141] libmachine: STDOUT: 
	I0916 10:36:42.122708    3190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:36:42.122740    3190 fix.go:56] duration metric: took 13.171709ms for fixHost
	I0916 10:36:42.122747    3190 start.go:83] releasing machines lock for "ha-094000", held for 13.187584ms
	W0916 10:36:42.122752    3190 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:36:42.122795    3190 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:36:42.122799    3190 start.go:729] Will try again in 5 seconds ...
	I0916 10:36:47.124974    3190 start.go:360] acquireMachinesLock for ha-094000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:36:47.125512    3190 start.go:364] duration metric: took 414.666µs to acquireMachinesLock for "ha-094000"
	I0916 10:36:47.125665    3190 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:36:47.125695    3190 fix.go:54] fixHost starting: 
	I0916 10:36:47.126476    3190 fix.go:112] recreateIfNeeded on ha-094000: state=Stopped err=<nil>
	W0916 10:36:47.126504    3190 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:36:47.131121    3190 out.go:177] * Restarting existing qemu2 VM for "ha-094000" ...
	I0916 10:36:47.139921    3190 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:36:47.140158    3190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:13:f3:5d:bf:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/ha-094000/disk.qcow2
	I0916 10:36:47.150029    3190 main.go:141] libmachine: STDOUT: 
	I0916 10:36:47.150095    3190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:36:47.150221    3190 fix.go:56] duration metric: took 24.5315ms for fixHost
	I0916 10:36:47.150242    3190 start.go:83] releasing machines lock for "ha-094000", held for 24.705041ms
	W0916 10:36:47.150468    3190 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-094000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-094000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:36:47.156941    3190 out.go:201] 
	W0916 10:36:47.161029    3190 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:36:47.161061    3190 out.go:270] * 
	* 
	W0916 10:36:47.163743    3190 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:36:47.174935    3190 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-094000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 7 (67.846917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
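
Note: every restart attempt above dies on the same line: the QEMU command is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which dials /var/run/socket_vmnet, and that dial is refused. A standalone Go probe that reproduces just the connection check (paths taken verbatim from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// socket_vmnet_client dials this unix socket before exec'ing
    	// qemu-system-aarch64; "connection refused" means nothing is
    	// accepting on it.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet not reachable:", err) // matches the failure above
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is listening")
    }

"Connection refused" on a unix socket typically means the socket file exists but no daemon is accepting on it, so the socket_vmnet service on this agent is the component to restart; a missing file would instead surface as "no such file or directory".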

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-094000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-094000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-094000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-094000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 7 (29.685958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-094000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-094000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.74775ms)

-- stdout --
	* The control-plane node ha-094000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-094000"

-- /stdout --
** stderr ** 
	I0916 10:36:47.363129    3210 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:36:47.363283    3210 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:47.363286    3210 out.go:358] Setting ErrFile to fd 2...
	I0916 10:36:47.363288    3210 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:36:47.363402    3210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:36:47.363653    3210 mustload.go:65] Loading cluster: ha-094000
	I0916 10:36:47.363918    3210 config.go:182] Loaded profile config "ha-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0916 10:36:47.364246    3210 out.go:270] ! The control-plane node ha-094000 host is not running (will try others): state=Stopped
	! The control-plane node ha-094000 host is not running (will try others): state=Stopped
	W0916 10:36:47.364357    3210 out.go:270] ! The control-plane node ha-094000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-094000-m02 host is not running (will try others): state=Stopped
	I0916 10:36:47.368235    3210 out.go:177] * The control-plane node ha-094000-m03 host is not running: state=Stopped
	I0916 10:36:47.372158    3210 out.go:177]   To start a cluster, run: "minikube start -p ha-094000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-094000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-094000 -n ha-094000: exit status 7 (29.87075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-094000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
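
Note: three different exit codes recur across this group: 80 for provisioning failures (GUEST_PROVISION), 83 for the advice-only "host is not running" path above, and 7 from the status post-mortems. A Go sketch of distinguishing them from a caller, with the code meanings inferred from this run rather than from minikube's documented exit-code table:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-darwin-arm64",
    		"node", "add", "-p", "ha-094000", "--control-plane")
    	err := cmd.Run()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		switch ee.ExitCode() {
    		case 80:
    			fmt.Println("GUEST_PROVISION: VM could not be started") // per RestartCluster above
    		case 83:
    			fmt.Println("host not running; start the cluster first") // per this failure
    		default:
    			fmt.Println("exit status", ee.ExitCode())
    		}
    		return
    	}
    	fmt.Println("node added")
    }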

TestImageBuild/serial/Setup (10.22s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-254000 --driver=qemu2 
E0916 10:36:49.878001    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-254000 --driver=qemu2 : exit status 80 (10.148668875s)

-- stdout --
	* [image-254000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-254000" primary control-plane node in "image-254000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-254000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-254000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-254000 -n image-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-254000 -n image-254000: exit status 7 (69.652167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-254000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.22s)

TestJSONOutput/start/Command (9.97s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-755000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0916 10:37:04.101119    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-755000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.968816208s)

-- stdout --
	{"specversion":"1.0","id":"13e4f98b-4d62-4a4f-9e79-81e8c2dbceee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-755000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8f55e4e-0128-4ecd-95fd-2102792cef12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"ca3e7343-3cd8-47c4-a402-cd9131796399","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig"}}
	{"specversion":"1.0","id":"1c287b2f-38a9-467f-8fcb-216e4c94c3e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bfd381d1-04f3-49f4-bb6b-00a285248b6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cac6a483-3c53-4a16-a8ab-76e2db94d4a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube"}}
	{"specversion":"1.0","id":"08538084-923a-49ae-8e29-046446afcf55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"33af4a3a-4e64-4e47-82a5-8b9f4a00ad96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"88fbf98b-0af5-4f5e-8943-1f0c91dbbba2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5ddb4f19-3245-47cf-8121-ac211afd6b81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-755000\" primary control-plane node in \"json-output-755000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"88f4bbe9-f8a3-49f6-ad72-e8eeeda1c501","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"0a1147e4-fcd8-4a51-b039-848a4b58e51f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-755000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"9103e204-7e41-496e-990e-654718bc9e50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"f30d5c6a-c32a-43a9-a8ca-466d01874ce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"3061da02-52ca-4164-8864-60572dfc4eba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-755000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"f2c8d069-62fd-4773-8952-ec59310e58ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"6010cfd0-06d9-427e-93d3-5b9f44ae6b30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-755000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.97s)
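
Note: the "converting to cloud events" error is the JSON test tripping over the driver's bare OUTPUT:/ERROR: lines, which are interleaved with the one-event-per-line JSON stream above. A reduced Go reproduction of that decode step (the event struct is trimmed to two fields and is not the test's actual type):

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"strings"
    )

    type event struct {
    	Type string          `json:"type"`
    	Data json.RawMessage `json:"data"`
    }

    func main() {
    	// Two lines of well-formed events followed by the driver noise
    	// that leaks into stdout in this run.
    	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating VM"}}
    OUTPUT: 
    ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

    	sc := bufio.NewScanner(strings.NewReader(stdout))
    	for sc.Scan() {
    		var ev event
    		if err := json.Unmarshal([]byte(sc.Text()), &ev); err != nil {
    			// reproduces: invalid character 'O' looking for beginning of value
    			fmt.Println("not a cloud event:", err)
    			continue
    		}
    		fmt.Println("event:", ev.Type)
    	}
    }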

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-755000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-755000 --output=json --user=testUser: exit status 83 (80.110125ms)

-- stdout --
	{"specversion":"1.0","id":"96d49b52-bded-4c3c-93a8-f0bba05f3cbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-755000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"08f337bc-f272-4c13-a9f4-6e531d3a4819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-755000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-755000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.06s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-755000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-755000 --output=json --user=testUser: exit status 83 (58.37525ms)

-- stdout --
	* The control-plane node json-output-755000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-755000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-755000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-755000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.06s)

TestMinikubeProfile (10.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-314000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-314000 --driver=qemu2 : exit status 80 (9.904653959s)

-- stdout --
	* [first-314000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-314000" primary control-plane node in "first-314000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-314000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-314000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-314000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-16 10:37:22.028323 -0700 PDT m=+1980.168683085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-316000 -n second-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-316000 -n second-316000: exit status 85 (81.190334ms)

                                                
                                                
-- stdout --
	* Profile "second-316000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-316000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-316000" host is not running, skipping log retrieval (state="* Profile \"second-316000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-316000\"")
helpers_test.go:175: Cleaning up "second-316000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-316000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-16 10:37:22.22047 -0700 PDT m=+1980.360834793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-314000 -n first-314000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-314000 -n first-314000: exit status 7 (29.144333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-314000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-314000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-314000
--- FAIL: TestMinikubeProfile (10.20s)

                                                
                                    
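Every qemu2 failure in this report reduces to the same precondition: nothing is accepting connections on /var/run/socket_vmnet, so each VM creation attempt exits with GUEST_PROVISION. A minimal pre-flight sketch, not part of the minikube suite, that dials the socket before any test runs (the path is taken from the SocketVMnetPath value in the traces; everything else is illustrative):

    // preflight.go: fail fast when the socket_vmnet helper is down.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const socketPath = "/var/run/socket_vmnet" // SocketVMnetPath in the traces

        // A refused dial here is exactly the condition behind every
        // `Failed to connect to "/var/run/socket_vmnet"` line in this report.
        conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", socketPath, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is up; qemu2 tests can proceed")
    }

On this agent the helper daemon was evidently not running, so both creation attempts in each test fail identically and the post-mortems find every host stopped.
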
TestMountStart/serial/StartWithMountFirst (10.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-586000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-586000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.021265708s)

                                                
                                                
-- stdout --
	* [mount-start-1-586000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-586000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-586000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-586000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-586000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-586000 -n mount-start-1-586000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-586000 -n mount-start-1-586000: exit status 7 (68.229584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-586000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.09s)

                                                
                                    
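The "* Deleting ... in qemu2 ..." line between the two creation attempts shows the start path's recovery behavior: tear down the half-created machine, wait, and try exactly once more before exiting 80. A compact sketch of that shape, with createHost and deleteHost as hypothetical stand-ins for the libmachine calls and the 5-second pause matching the "Will try again in 5 seconds" line in the next trace:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createWithRetry mirrors the observed control flow: the first failure is
    // tolerated ("StartHost failed, but will try again"), the partial machine
    // is deleted, and a second failure surfaces as GUEST_PROVISION.
    func createWithRetry(createHost, deleteHost func() error) error {
        if err := createHost(); err == nil {
            return nil
        }
        _ = deleteHost() // best-effort cleanup of the partial machine
        time.Sleep(5 * time.Second)
        if err := createHost(); err != nil {
            return errors.New("GUEST_PROVISION: " + err.Error())
        }
        return nil
    }

    func main() {
        failing := func() error { return errors.New(`Failed to connect to "/var/run/socket_vmnet"`) }
        fmt.Println(createWithRetry(failing, func() error { return nil }))
    }

With socket_vmnet down the retry is pure overhead, which is why these tests cluster tightly around the 10-second mark.
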
TestMultiNode/serial/FreshStart2Nodes (10.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.947585917s)

                                                
                                                
-- stdout --
	* [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-416000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:37:32.627438    3357 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:37:32.627556    3357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:37:32.627559    3357 out.go:358] Setting ErrFile to fd 2...
	I0916 10:37:32.627562    3357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:37:32.627679    3357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:37:32.628736    3357 out.go:352] Setting JSON to false
	I0916 10:37:32.644736    3357 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2216,"bootTime":1726506036,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:37:32.644805    3357 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:37:32.651745    3357 out.go:177] * [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:37:32.660412    3357 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:37:32.660463    3357 notify.go:220] Checking for updates...
	I0916 10:37:32.668545    3357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:37:32.670124    3357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:37:32.673544    3357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:37:32.676524    3357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:37:32.679564    3357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:37:32.682775    3357 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:37:32.687495    3357 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:37:32.694557    3357 start.go:297] selected driver: qemu2
	I0916 10:37:32.694564    3357 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:37:32.694572    3357 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:37:32.696784    3357 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:37:32.699513    3357 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:37:32.702628    3357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:37:32.702644    3357 cni.go:84] Creating CNI manager for ""
	I0916 10:37:32.702662    3357 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:37:32.702666    3357 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:37:32.702698    3357 start.go:340] cluster config:
	{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:37:32.706227    3357 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:37:32.713496    3357 out.go:177] * Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	I0916 10:37:32.716552    3357 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:37:32.716568    3357 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:37:32.716578    3357 cache.go:56] Caching tarball of preloaded images
	I0916 10:37:32.716646    3357 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:37:32.716651    3357 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:37:32.716870    3357 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/multinode-416000/config.json ...
	I0916 10:37:32.716881    3357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/multinode-416000/config.json: {Name:mkffe92f8c81f8ba00470902547c0969edd08c59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:37:32.717109    3357 start.go:360] acquireMachinesLock for multinode-416000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:37:32.717142    3357 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "multinode-416000"
	I0916 10:37:32.717152    3357 start.go:93] Provisioning new machine with config: &{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:37:32.717182    3357 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:37:32.723523    3357 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:37:32.740609    3357 start.go:159] libmachine.API.Create for "multinode-416000" (driver="qemu2")
	I0916 10:37:32.740638    3357 client.go:168] LocalClient.Create starting
	I0916 10:37:32.740698    3357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:37:32.740729    3357 main.go:141] libmachine: Decoding PEM data...
	I0916 10:37:32.740738    3357 main.go:141] libmachine: Parsing certificate...
	I0916 10:37:32.740773    3357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:37:32.740795    3357 main.go:141] libmachine: Decoding PEM data...
	I0916 10:37:32.740804    3357 main.go:141] libmachine: Parsing certificate...
	I0916 10:37:32.741190    3357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:37:32.904145    3357 main.go:141] libmachine: Creating SSH key...
	I0916 10:37:33.096214    3357 main.go:141] libmachine: Creating Disk image...
	I0916 10:37:33.096224    3357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:37:33.096416    3357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:37:33.106033    3357 main.go:141] libmachine: STDOUT: 
	I0916 10:37:33.106048    3357 main.go:141] libmachine: STDERR: 
	I0916 10:37:33.106125    3357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2 +20000M
	I0916 10:37:33.113974    3357 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:37:33.113989    3357 main.go:141] libmachine: STDERR: 
	I0916 10:37:33.114000    3357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:37:33.114008    3357 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:37:33.114022    3357 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:37:33.114053    3357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:b1:80:88:8b:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:37:33.115680    3357 main.go:141] libmachine: STDOUT: 
	I0916 10:37:33.115693    3357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:37:33.115713    3357 client.go:171] duration metric: took 375.078583ms to LocalClient.Create
	I0916 10:37:35.117848    3357 start.go:128] duration metric: took 2.400701208s to createHost
	I0916 10:37:35.117913    3357 start.go:83] releasing machines lock for "multinode-416000", held for 2.400818s
	W0916 10:37:35.117967    3357 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:37:35.130388    3357 out.go:177] * Deleting "multinode-416000" in qemu2 ...
	W0916 10:37:35.162129    3357 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:37:35.162152    3357 start.go:729] Will try again in 5 seconds ...
	I0916 10:37:40.164219    3357 start.go:360] acquireMachinesLock for multinode-416000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:37:40.164674    3357 start.go:364] duration metric: took 374.834µs to acquireMachinesLock for "multinode-416000"
	I0916 10:37:40.164793    3357 start.go:93] Provisioning new machine with config: &{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:37:40.165060    3357 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:37:40.170799    3357 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:37:40.220894    3357 start.go:159] libmachine.API.Create for "multinode-416000" (driver="qemu2")
	I0916 10:37:40.220947    3357 client.go:168] LocalClient.Create starting
	I0916 10:37:40.221093    3357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:37:40.221154    3357 main.go:141] libmachine: Decoding PEM data...
	I0916 10:37:40.221172    3357 main.go:141] libmachine: Parsing certificate...
	I0916 10:37:40.221248    3357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:37:40.221291    3357 main.go:141] libmachine: Decoding PEM data...
	I0916 10:37:40.221308    3357 main.go:141] libmachine: Parsing certificate...
	I0916 10:37:40.221843    3357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:37:40.394221    3357 main.go:141] libmachine: Creating SSH key...
	I0916 10:37:40.479440    3357 main.go:141] libmachine: Creating Disk image...
	I0916 10:37:40.479445    3357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:37:40.479619    3357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:37:40.488972    3357 main.go:141] libmachine: STDOUT: 
	I0916 10:37:40.488988    3357 main.go:141] libmachine: STDERR: 
	I0916 10:37:40.489040    3357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2 +20000M
	I0916 10:37:40.496923    3357 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:37:40.496940    3357 main.go:141] libmachine: STDERR: 
	I0916 10:37:40.496953    3357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:37:40.496957    3357 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:37:40.496965    3357 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:37:40.497003    3357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:3b:5a:83:23:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:37:40.498700    3357 main.go:141] libmachine: STDOUT: 
	I0916 10:37:40.498715    3357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:37:40.498731    3357 client.go:171] duration metric: took 277.78625ms to LocalClient.Create
	I0916 10:37:42.500863    3357 start.go:128] duration metric: took 2.3358265s to createHost
	I0916 10:37:42.500935    3357 start.go:83] releasing machines lock for "multinode-416000", held for 2.336291375s
	W0916 10:37:42.501346    3357 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:37:42.510969    3357 out.go:201] 
	W0916 10:37:42.523142    3357 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:37:42.523169    3357 out.go:270] * 
	* 
	W0916 10:37:42.524877    3357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:37:42.533887    3357 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-416000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (68.743083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.02s)

                                                
                                    
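The verbose trace shows everything up to the network attach succeeding: certificates parse, the cached ISO is found, and the disk image is built with two qemu-img calls before the dial to socket_vmnet fails. A self-contained sketch of those two invocations as they appear in the log (the disk path argument is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // prepareDisk reproduces the two qemu-img steps from the trace:
    //   qemu-img convert -f raw -O qcow2 <disk>.raw <disk>
    //   qemu-img resize <disk> +20000M
    func prepareDisk(disk string, extraMB int) error {
        steps := [][]string{
            {"qemu-img", "convert", "-f", "raw", "-O", "qcow2", disk + ".raw", disk},
            {"qemu-img", "resize", disk, fmt.Sprintf("+%dM", extraMB)},
        }
        for _, args := range steps {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v failed: %v\n%s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := prepareDisk("disk.qcow2", 20000); err != nil {
            fmt.Println(err)
        }
    }

Only the final step, handing the qemu-system-aarch64 command line to socket_vmnet_client, fails, which points at a missing daemon rather than a broken QEMU install.
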
TestMultiNode/serial/DeployApp2Nodes (105.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (131.297916ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-416000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- rollout status deployment/busybox: exit status 1 (57.910042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.965292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.211667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.669791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.991708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.121583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.903292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.045791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.455708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0916 10:38:27.189783    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.229ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.139709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.174459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.452458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.449208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.621583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.78875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.639166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (105.57s)

                                                
                                    
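Almost all of this test's 105 seconds go to the pod-IP poll: multinode_test.go:505 reruns the same jsonpath query until it yields IPs, and with no cluster every attempt fails with "no server found". A sketch of that polling shape (binary path and kubectl arguments copied from the log; attempt count and delay are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // podIPs retries the jsonpath query used by the test, returning on the
    // first non-empty answer.
    func podIPs(profile string, attempts int, delay time.Duration) (string, error) {
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("out/minikube-darwin-arm64", "kubectl",
                "-p", profile, "--", "get", "pods",
                "-o", "jsonpath={.items[*].status.podIP}").Output()
            if err == nil && len(out) > 0 {
                return string(out), nil
            }
            time.Sleep(delay)
        }
        return "", fmt.Errorf("no pod IPs after %d attempts", attempts)
    }

    func main() {
        ips, err := podIPs("multinode-416000", 3, 2*time.Second)
        fmt.Println(ips, err)
    }
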
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-416000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.994917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.948166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
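Every post-mortem in this report runs the same probe: `status --format={{.Host}}` for the profile, with exit status 7 tolerated because a stopped host is an expected outcome. A sketch of that helper (binary path and flags from the log; the exit-code handling mirrors the "status error: exit status 7 (may be ok)" lines):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // hostState returns the Host field from `minikube status`, treating exit
    // status 7 (host stopped) as informational rather than an error.
    func hostState(profile string) (string, error) {
        out, err := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", profile, "-n", profile).Output()
        state := strings.TrimSpace(string(out))
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 7 {
            return state, nil // e.g. "Stopped": may be ok, skip log retrieval
        }
        return state, err
    }

    func main() {
        fmt.Println(hostState("multinode-416000"))
    }
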
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-416000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-416000 -v 3 --alsologtostderr: exit status 83 (41.635875ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-416000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-416000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:28.299739    3452 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:28.299915    3452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.299918    3452 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:28.299920    3452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.300043    3452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:28.300318    3452 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:28.300539    3452 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:28.305491    3452 out.go:177] * The control-plane node multinode-416000 host is not running: state=Stopped
	I0916 10:39:28.308384    3452 out.go:177]   To start a cluster, run: "minikube start -p multinode-416000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-416000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.917667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
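This run adds a third exit code to the pattern: 83 for a profile whose host is not running, alongside 80 (provisioning failed), 85 (profile not found, seen in the TestMinikubeProfile post-mortem) and 7 (status probe on a stopped host). A small classifier sketch; these meanings are inferred from the runs in this report, not taken from minikube's documented reason codes:

    package main

    import "fmt"

    // classify maps the exit codes observed in this report to the condition
    // that produced them.
    func classify(code int) string {
        switch code {
        case 7:
            return "status probe: host stopped (may be ok)"
        case 80:
            return "GUEST_PROVISION: VM creation failed (socket_vmnet unreachable)"
        case 83:
            return "host not running; advice printed to start the cluster"
        case 85:
            return "profile not found"
        default:
            return "not observed in this report"
        }
    }

    func main() {
        for _, c := range []int{7, 80, 83, 85} {
            fmt.Printf("%d: %s\n", c, classify(c))
        }
    }
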
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-416000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-416000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.633334ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-416000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-416000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-416000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (30.111875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
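Two errors stack here: kubectl fails because the context does not exist, so the test feeds an empty string to its JSON decoder, and that empty input is what produces "unexpected end of JSON input". A minimal reproduction of the second error:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl printed nothing on stdout, so the decoder sees zero bytes.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }
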
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-416000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-416000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-416000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-416000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (30.027833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
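
For context on the assertion above: `profile list --output json` returns the cluster config exactly as quoted, and the test counts the entries in Config.Nodes. Because the VM never started, only the primary control-plane node was ever registered, so the count is 1 instead of the expected 3. A minimal Go sketch of that decode-and-count step (the struct is reduced to the fields involved and is not minikube's actual type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Minimal shape of `minikube profile list --output json`,
	// trimmed to the fields the node count depends on.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name         string `json:"Name"`
					ControlPlane bool   `json:"ControlPlane"`
					Worker       bool   `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Cut down from the failure output above: a single node in Config.Nodes.
		raw := `{"invalid":[],"valid":[{"Name":"multinode-416000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // nodes: 1, the test wants 3
	}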

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status --output json --alsologtostderr: exit status 7 (30.355875ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-416000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:28.509202    3464 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:28.509391    3464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.509394    3464 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:28.509396    3464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.509527    3464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:28.509653    3464 out.go:352] Setting JSON to true
	I0916 10:39:28.509662    3464 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:28.509730    3464 notify.go:220] Checking for updates...
	I0916 10:39:28.509884    3464 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:28.509890    3464 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:28.510140    3464 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:28.510143    3464 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:28.510145    3464 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-416000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.615917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
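
The decode error above is a plain JSON shape mismatch: with only one node, `minikube status --output json` emits a single object (see the stdout block), while the test unmarshals into a slice of statuses. A self-contained Go sketch reproducing the error (Status here is a stand-in for minikube's cmd.Status, not its real definition):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Stand-in for minikube's cmd.Status with just enough fields.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		// One-node clusters yield a bare object, not an array.
		raw := `{"Name":"multinode-416000","Host":"Stopped"}`
		var statuses []Status
		err := json.Unmarshal([]byte(raw), &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}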

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 node stop m03: exit status 85 (46.023625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-416000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status: exit status 7 (30.140792ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr: exit status 7 (30.359583ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:28.646149    3472 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:28.646315    3472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.646318    3472 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:28.646320    3472 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.646459    3472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:28.646625    3472 out.go:352] Setting JSON to false
	I0916 10:39:28.646634    3472 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:28.646693    3472 notify.go:220] Checking for updates...
	I0916 10:39:28.646828    3472 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:28.646834    3472 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:28.647063    3472 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:28.647068    3472 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:28.647070    3472 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr": multinode-416000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (30.172917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
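
Exit status 85 above is minikube's GUEST_NODE_RETRIEVE error class: the test asks to stop m03, but since the VM never started, no secondary node was ever added, so there is no m03 to look up. A pre-flight check along these lines would surface that (hasNode is a hypothetical helper; the loose substring parsing of `node list` output is an assumption, not minikube's documented format):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasNode reports whether `minikube node list` mentions the node.
	// Loose substring matching; good enough for a pre-flight check.
	func hasNode(profile, node string) (bool, error) {
		out, err := exec.Command("out/minikube-darwin-arm64", "node", "list", "-p", profile).Output()
		if err != nil {
			return false, err
		}
		return strings.Contains(string(out), node), nil
	}

	func main() {
		ok, err := hasNode("multinode-416000", "multinode-416000-m03")
		fmt.Println(ok, err) // false <nil> on this cluster, hence GUEST_NODE_RETRIEVE
	}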

TestMultiNode/serial/StartAfterStop (46.46s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 node start m03 -v=7 --alsologtostderr: exit status 85 (42.741ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:28.705283    3476 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:28.705527    3476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.705530    3476 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:28.705532    3476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.705658    3476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:28.705879    3476 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:28.706071    3476 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:28.709438    3476 out.go:201] 
	W0916 10:39:28.712390    3476 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0916 10:39:28.712395    3476 out.go:270] * 
	* 
	W0916 10:39:28.714013    3476 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:39:28.717378    3476 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0916 10:39:28.705283    3476 out.go:345] Setting OutFile to fd 1 ...
I0916 10:39:28.705527    3476 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:39:28.705530    3476 out.go:358] Setting ErrFile to fd 2...
I0916 10:39:28.705532    3476 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:39:28.705658    3476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
I0916 10:39:28.705879    3476 mustload.go:65] Loading cluster: multinode-416000
I0916 10:39:28.706071    3476 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:39:28.709438    3476 out.go:201] 
W0916 10:39:28.712390    3476 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0916 10:39:28.712395    3476 out.go:270] * 
* 
W0916 10:39:28.714013    3476 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0916 10:39:28.717378    3476 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-416000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (30.31ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:28.749896    3478 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:28.750030    3478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.750033    3478 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:28.750035    3478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:28.750192    3478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:28.750311    3478 out.go:352] Setting JSON to false
	I0916 10:39:28.750319    3478 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:28.750376    3478 notify.go:220] Checking for updates...
	I0916 10:39:28.750533    3478 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:28.750539    3478 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:28.750764    3478 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:28.750767    3478 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:28.750769    3478 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (73.589917ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:30.234912    3480 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:30.235115    3480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:30.235119    3480 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:30.235123    3480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:30.235311    3480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:30.235463    3480 out.go:352] Setting JSON to false
	I0916 10:39:30.235474    3480 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:30.235515    3480 notify.go:220] Checking for updates...
	I0916 10:39:30.235747    3480 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:30.235754    3480 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:30.236081    3480 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:30.236086    3480 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:30.236089    3480 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (73.036417ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:31.696840    3482 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:31.697041    3482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:31.697045    3482 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:31.697048    3482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:31.697217    3482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:31.697381    3482 out.go:352] Setting JSON to false
	I0916 10:39:31.697392    3482 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:31.697449    3482 notify.go:220] Checking for updates...
	I0916 10:39:31.697687    3482 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:31.697694    3482 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:31.698001    3482 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:31.698006    3482 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:31.698009    3482 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (75.123083ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:34.224370    3484 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:34.224550    3484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:34.224558    3484 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:34.224561    3484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:34.224731    3484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:34.224900    3484 out.go:352] Setting JSON to false
	I0916 10:39:34.224912    3484 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:34.224950    3484 notify.go:220] Checking for updates...
	I0916 10:39:34.225180    3484 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:34.225188    3484 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:34.225489    3484 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:34.225493    3484 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:34.225496    3484 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (72.809083ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:36.410268    3486 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:36.410450    3486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:36.410455    3486 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:36.410458    3486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:36.410613    3486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:36.410764    3486 out.go:352] Setting JSON to false
	I0916 10:39:36.410776    3486 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:36.410807    3486 notify.go:220] Checking for updates...
	I0916 10:39:36.411061    3486 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:36.411068    3486 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:36.411401    3486 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:36.411406    3486 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:36.411410    3486 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (73.296542ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:42.805557    3488 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:42.805740    3488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:42.805744    3488 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:42.805747    3488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:42.805922    3488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:42.806086    3488 out.go:352] Setting JSON to false
	I0916 10:39:42.806097    3488 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:42.806133    3488 notify.go:220] Checking for updates...
	I0916 10:39:42.806387    3488 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:42.806394    3488 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:42.806733    3488 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:42.806737    3488 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:42.806740    3488 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (69.211084ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:47.365005    3494 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:47.365230    3494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:47.365236    3494 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:47.365240    3494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:47.365434    3494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:47.365611    3494 out.go:352] Setting JSON to false
	I0916 10:39:47.365623    3494 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:47.365661    3494 notify.go:220] Checking for updates...
	I0916 10:39:47.365919    3494 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:47.365927    3494 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:47.366279    3494 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:47.366284    3494 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:47.366288    3494 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (75.564833ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:39:55.830304    3498 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:39:55.830532    3498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:55.830540    3498 out.go:358] Setting ErrFile to fd 2...
	I0916 10:39:55.830544    3498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:39:55.830743    3498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:39:55.830922    3498 out.go:352] Setting JSON to false
	I0916 10:39:55.830937    3498 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:39:55.831002    3498 notify.go:220] Checking for updates...
	I0916 10:39:55.831234    3498 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:39:55.831245    3498 status.go:255] checking status of multinode-416000 ...
	I0916 10:39:55.831588    3498 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:39:55.831593    3498 status.go:343] host is not running, skipping remaining checks
	I0916 10:39:55.831597    3498 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr: exit status 7 (74.158916ms)

                                                
                                                
-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:40:15.097433    3501 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:40:15.097657    3501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:15.097661    3501 out.go:358] Setting ErrFile to fd 2...
	I0916 10:40:15.097665    3501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:15.097863    3501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:40:15.098046    3501 out.go:352] Setting JSON to false
	I0916 10:40:15.098057    3501 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:40:15.098109    3501 notify.go:220] Checking for updates...
	I0916 10:40:15.098348    3501 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:40:15.098355    3501 status.go:255] checking status of multinode-416000 ...
	I0916 10:40:15.098671    3501 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:40:15.098676    3501 status.go:343] host is not running, skipping remaining checks
	I0916 10:40:15.098679    3501 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-416000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (33.00725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (46.46s)
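
The 46-second wall time for what is otherwise a fraction of a second of commands is the test's own backoff: multinode_test.go:290 re-runs `minikube status` on a schedule, visible in the stderr timestamps (10:39:28, :30, :31, :34, :36, :42, :47, :55, then 10:40:15), and every attempt returns exit status 7 because the host stays Stopped. A rough Go sketch of that retry pattern (the schedule below is illustrative, not the test's actual intervals):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pollStatus re-runs `minikube status` until it exits 0 or the
	// backoff schedule is exhausted.
	func pollStatus(profile string) error {
		backoff := []time.Duration{
			time.Second, 2 * time.Second, 3 * time.Second,
			5 * time.Second, 8 * time.Second, 13 * time.Second,
		}
		var err error
		for _, d := range backoff {
			if err = exec.Command("out/minikube-darwin-arm64", "-p", profile, "status").Run(); err == nil {
				return nil // zero exit: every component reported Running
			}
			time.Sleep(d) // host still Stopped; wait and retry
		}
		return err
	}

	func main() {
		if err := pollStatus("multinode-416000"); err != nil {
			fmt.Println("gave up:", err) // here: exit status 7 every time
		}
	}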

TestMultiNode/serial/RestartKeepsNodes (8.65s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-416000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-416000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-416000: (3.303982792s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.2167545s)

                                                
                                                
-- stdout --
	* [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	* Restarting existing qemu2 VM for "multinode-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:40:18.531218    3527 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:40:18.531398    3527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:18.531403    3527 out.go:358] Setting ErrFile to fd 2...
	I0916 10:40:18.531406    3527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:18.531564    3527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:40:18.532830    3527 out.go:352] Setting JSON to false
	I0916 10:40:18.551986    3527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2382,"bootTime":1726506036,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:40:18.552060    3527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:40:18.556805    3527 out.go:177] * [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:40:18.563829    3527 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:40:18.563833    3527 notify.go:220] Checking for updates...
	I0916 10:40:18.569746    3527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:40:18.572669    3527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:40:18.575775    3527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:40:18.578759    3527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:40:18.581806    3527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:40:18.584995    3527 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:40:18.585053    3527 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:40:18.589774    3527 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:40:18.596714    3527 start.go:297] selected driver: qemu2
	I0916 10:40:18.596720    3527 start.go:901] validating driver "qemu2" against &{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:18.596770    3527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:40:18.599283    3527 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:40:18.599319    3527 cni.go:84] Creating CNI manager for ""
	I0916 10:40:18.599352    3527 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:40:18.599401    3527 start.go:340] cluster config:
	{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-416000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:18.603310    3527 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:18.610737    3527 out.go:177] * Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	I0916 10:40:18.614723    3527 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:40:18.614736    3527 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:40:18.614745    3527 cache.go:56] Caching tarball of preloaded images
	I0916 10:40:18.614804    3527 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:40:18.614809    3527 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:40:18.614858    3527 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/multinode-416000/config.json ...
	I0916 10:40:18.615307    3527 start.go:360] acquireMachinesLock for multinode-416000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:40:18.615341    3527 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "multinode-416000"
	I0916 10:40:18.615350    3527 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:40:18.615355    3527 fix.go:54] fixHost starting: 
	I0916 10:40:18.615473    3527 fix.go:112] recreateIfNeeded on multinode-416000: state=Stopped err=<nil>
	W0916 10:40:18.615482    3527 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:40:18.623748    3527 out.go:177] * Restarting existing qemu2 VM for "multinode-416000" ...
	I0916 10:40:18.627701    3527 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:40:18.627736    3527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:3b:5a:83:23:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:40:18.629714    3527 main.go:141] libmachine: STDOUT: 
	I0916 10:40:18.629733    3527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:40:18.629767    3527 fix.go:56] duration metric: took 14.411208ms for fixHost
	I0916 10:40:18.629772    3527 start.go:83] releasing machines lock for "multinode-416000", held for 14.426208ms
	W0916 10:40:18.629777    3527 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:40:18.629815    3527 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:40:18.629823    3527 start.go:729] Will try again in 5 seconds ...
	I0916 10:40:23.631429    3527 start.go:360] acquireMachinesLock for multinode-416000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:40:23.631846    3527 start.go:364] duration metric: took 324.291µs to acquireMachinesLock for "multinode-416000"
	I0916 10:40:23.631986    3527 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:40:23.632007    3527 fix.go:54] fixHost starting: 
	I0916 10:40:23.632791    3527 fix.go:112] recreateIfNeeded on multinode-416000: state=Stopped err=<nil>
	W0916 10:40:23.632821    3527 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:40:23.637363    3527 out.go:177] * Restarting existing qemu2 VM for "multinode-416000" ...
	I0916 10:40:23.641238    3527 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:40:23.641461    3527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:3b:5a:83:23:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:40:23.650761    3527 main.go:141] libmachine: STDOUT: 
	I0916 10:40:23.650820    3527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:40:23.650893    3527 fix.go:56] duration metric: took 18.887667ms for fixHost
	I0916 10:40:23.650908    3527 start.go:83] releasing machines lock for "multinode-416000", held for 19.040084ms
	W0916 10:40:23.651080    3527 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:40:23.658216    3527 out.go:201] 
	W0916 10:40:23.662309    3527 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:40:23.662351    3527 out.go:270] * 
	* 
	W0916 10:40:23.664874    3527 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:40:23.672240    3527 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-416000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-416000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (32.540875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.65s)
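
The root cause surfaced by this restart is the recurring one for this run: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connection is refused, so the VM can never come back up. A small Go probe for that precondition (a diagnostic sketch, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dialing the daemon's unix socket fails with "connection refused"
		// (or "no such file or directory" if the socket file is gone)
		// when socket_vmnet is not listening -- the same condition the
		// driver hits above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}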

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 node delete m03: exit status 83 (40.006208ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-416000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-416000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-416000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr: exit status 7 (29.589584ms)

-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
** stderr ** 
	I0916 10:40:23.854413    3543 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:40:23.854566    3543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:23.854569    3543 out.go:358] Setting ErrFile to fd 2...
	I0916 10:40:23.854572    3543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:23.854700    3543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:40:23.854831    3543 out.go:352] Setting JSON to false
	I0916 10:40:23.854840    3543 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:40:23.854906    3543 notify.go:220] Checking for updates...
	I0916 10:40:23.855046    3543 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:40:23.855052    3543 status.go:255] checking status of multinode-416000 ...
	I0916 10:40:23.855307    3543 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:40:23.855310    3543 status.go:343] host is not running, skipping remaining checks
	I0916 10:40:23.855312    3543 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (29.801667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
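
Note: the failures in this group are told apart by minikube's exit status alone: 80 for provisioning errors, 83 when the control plane is stopped, 7 when `status` reports a problem. The sketch below shows one way to capture such an exit code with os/exec; it mirrors the harness's run-and-inspect pattern but is not the actual helpers_test.go implementation.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runAndExitCode runs a binary and returns its combined output and exit
	// status. Illustrative sketch only; the real helpers live in helpers_test.go.
	func runAndExitCode(bin string, args ...string) (string, int, error) {
		out, err := exec.Command(bin, args...).CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A non-zero exit (7, 80, 83 above) is data for the test, not a failure to run.
			return string(out), ee.ExitCode(), nil
		}
		return string(out), 0, err // err != nil only if the binary could not be started at all
	}

	func main() {
		out, code, err := runAndExitCode("out/minikube-darwin-arm64", "-p", "multinode-416000", "node", "delete", "m03")
		if err != nil {
			panic(err)
		}
		fmt.Printf("exit status %d\n%s", code, out)
	}
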
TestMultiNode/serial/StopMultiNode (3.45s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-416000 stop: (3.319516833s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status: exit status 7 (63.226458ms)

-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr: exit status 7 (32.466416ms)

-- stdout --
	multinode-416000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
** stderr ** 
	I0916 10:40:27.299963    3567 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:40:27.300142    3567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:27.300145    3567 out.go:358] Setting ErrFile to fd 2...
	I0916 10:40:27.300147    3567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:27.300300    3567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:40:27.300429    3567 out.go:352] Setting JSON to false
	I0916 10:40:27.300438    3567 mustload.go:65] Loading cluster: multinode-416000
	I0916 10:40:27.300507    3567 notify.go:220] Checking for updates...
	I0916 10:40:27.300655    3567 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:40:27.300661    3567 status.go:255] checking status of multinode-416000 ...
	I0916 10:40:27.300886    3567 status.go:330] multinode-416000 host status = "Stopped" (err=<nil>)
	I0916 10:40:27.300890    3567 status.go:343] host is not running, skipping remaining checks
	I0916 10:40:27.300892    3567 status.go:257] multinode-416000 status: &{Name:multinode-416000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr": multinode-416000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-416000 status --alsologtostderr": multinode-416000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (30.24275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.45s)
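
Note: `stop` itself succeeded above (3.3s, exit 0); the test fails in the follow-up status check, which expects one "host: Stopped"/"kubelet: Stopped" block per node of the two-node cluster and sees only the control plane. Below is a sketch of that counting check, assuming a plain strings.Count over the status output; the real multinode_test.go assertion may differ in detail.

	package main

	import (
		"fmt"
		"strings"
	)

	// expectStopped checks that a `minikube status` dump reports every node of
	// an n-node cluster as stopped. Sketch of the check behind "incorrect
	// number of stopped hosts/kubelets" above; the real test may parse differently.
	func expectStopped(statusOut string, nodes int) error {
		if got := strings.Count(statusOut, "host: Stopped"); got != nodes {
			return fmt.Errorf("incorrect number of stopped hosts: want %d, got %d", nodes, got)
		}
		if got := strings.Count(statusOut, "kubelet: Stopped"); got != nodes {
			return fmt.Errorf("incorrect number of stopped kubelets: want %d, got %d", nodes, got)
		}
		return nil
	}

	func main() {
		out := "multinode-416000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		// Fails here: only one node block is present for a two-node cluster.
		fmt.Println(expectStopped(out, 2))
	}
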
TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180866333s)

-- stdout --
	* [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	* Restarting existing qemu2 VM for "multinode-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-416000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0916 10:40:27.360283    3571 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:40:27.360431    3571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:27.360435    3571 out.go:358] Setting ErrFile to fd 2...
	I0916 10:40:27.360437    3571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:27.360566    3571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:40:27.361527    3571 out.go:352] Setting JSON to false
	I0916 10:40:27.377770    3571 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2391,"bootTime":1726506036,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:40:27.377870    3571 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:40:27.383194    3571 out.go:177] * [multinode-416000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:40:27.390089    3571 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:40:27.390118    3571 notify.go:220] Checking for updates...
	I0916 10:40:27.398060    3571 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:40:27.401130    3571 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:40:27.404086    3571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:40:27.407019    3571 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:40:27.410038    3571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:40:27.413354    3571 config.go:182] Loaded profile config "multinode-416000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:40:27.413612    3571 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:40:27.418047    3571 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:40:27.425115    3571 start.go:297] selected driver: qemu2
	I0916 10:40:27.425121    3571 start.go:901] validating driver "qemu2" against &{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:27.425178    3571 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:40:27.427677    3571 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:40:27.427715    3571 cni.go:84] Creating CNI manager for ""
	I0916 10:40:27.427734    3571 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:40:27.427794    3571 start.go:340] cluster config:
	{Name:multinode-416000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-416000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:27.431650    3571 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:27.439103    3571 out.go:177] * Starting "multinode-416000" primary control-plane node in "multinode-416000" cluster
	I0916 10:40:27.442902    3571 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:40:27.442917    3571 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:40:27.442932    3571 cache.go:56] Caching tarball of preloaded images
	I0916 10:40:27.442992    3571 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:40:27.442998    3571 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:40:27.443055    3571 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/multinode-416000/config.json ...
	I0916 10:40:27.443500    3571 start.go:360] acquireMachinesLock for multinode-416000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:40:27.443531    3571 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "multinode-416000"
	I0916 10:40:27.443544    3571 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:40:27.443550    3571 fix.go:54] fixHost starting: 
	I0916 10:40:27.443660    3571 fix.go:112] recreateIfNeeded on multinode-416000: state=Stopped err=<nil>
	W0916 10:40:27.443668    3571 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:40:27.452079    3571 out.go:177] * Restarting existing qemu2 VM for "multinode-416000" ...
	I0916 10:40:27.456043    3571 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:40:27.456077    3571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:3b:5a:83:23:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:40:27.458158    3571 main.go:141] libmachine: STDOUT: 
	I0916 10:40:27.458178    3571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:40:27.458209    3571 fix.go:56] duration metric: took 14.659042ms for fixHost
	I0916 10:40:27.458213    3571 start.go:83] releasing machines lock for "multinode-416000", held for 14.677666ms
	W0916 10:40:27.458219    3571 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:40:27.458253    3571 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:40:27.458261    3571 start.go:729] Will try again in 5 seconds ...
	I0916 10:40:32.460285    3571 start.go:360] acquireMachinesLock for multinode-416000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:40:32.460618    3571 start.go:364] duration metric: took 271.959µs to acquireMachinesLock for "multinode-416000"
	I0916 10:40:32.460756    3571 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:40:32.460773    3571 fix.go:54] fixHost starting: 
	I0916 10:40:32.461536    3571 fix.go:112] recreateIfNeeded on multinode-416000: state=Stopped err=<nil>
	W0916 10:40:32.461561    3571 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:40:32.466076    3571 out.go:177] * Restarting existing qemu2 VM for "multinode-416000" ...
	I0916 10:40:32.469923    3571 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:40:32.470095    3571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:3b:5a:83:23:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/multinode-416000/disk.qcow2
	I0916 10:40:32.479622    3571 main.go:141] libmachine: STDOUT: 
	I0916 10:40:32.479721    3571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:40:32.479812    3571 fix.go:56] duration metric: took 19.040708ms for fixHost
	I0916 10:40:32.479825    3571 start.go:83] releasing machines lock for "multinode-416000", held for 19.183667ms
	W0916 10:40:32.479976    3571 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:40:32.485906    3571 out.go:201] 
	W0916 10:40:32.489967    3571 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:40:32.489990    3571 out.go:270] * 
	* 
	W0916 10:40:32.492569    3571 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:40:32.499906    3571 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-416000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (66.81625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
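
Note: every start in this run dies identically: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on /var/run/socket_vmnet, i.e. no socket_vmnet daemon is listening on the CI host. The probe below reproduces just that connection step (socket path taken from the log above); it is a diagnostic sketch, not part of the test suite.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same unix socket the qemu2 driver hands to socket_vmnet_client above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host this prints a "connection refused" error,
			// matching the driver's failure mode in the log.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
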
TestMultiNode/serial/ValidateNameConflict (20.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-416000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000-m01 --driver=qemu2 : exit status 80 (9.852067s)

-- stdout --
	* [multinode-416000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-416000-m01" primary control-plane node in "multinode-416000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-416000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-416000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-416000-m02 --driver=qemu2 : exit status 80 (10.019174792s)

-- stdout --
	* [multinode-416000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-416000-m02" primary control-plane node in "multinode-416000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-416000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-416000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-416000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-416000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-416000: exit status 83 (79.447541ms)

-- stdout --
	* The control-plane node multinode-416000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-416000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-416000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-416000 -n multinode-416000: exit status 7 (30.098209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-416000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.10s)

TestPreload (10.08s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-006000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-006000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.923914959s)

-- stdout --
	* [test-preload-006000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-006000" primary control-plane node in "test-preload-006000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-006000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0916 10:40:52.833347    3632 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:40:52.833530    3632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:52.833534    3632 out.go:358] Setting ErrFile to fd 2...
	I0916 10:40:52.833536    3632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:52.833667    3632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:40:52.834610    3632 out.go:352] Setting JSON to false
	I0916 10:40:52.850792    3632 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2416,"bootTime":1726506036,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:40:52.850886    3632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:40:52.859024    3632 out.go:177] * [test-preload-006000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:40:52.863077    3632 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:40:52.863100    3632 notify.go:220] Checking for updates...
	I0916 10:40:52.869976    3632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:40:52.873058    3632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:40:52.874609    3632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:40:52.877977    3632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:40:52.881062    3632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:40:52.884405    3632 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:40:52.884458    3632 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:40:52.888987    3632 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:40:52.896008    3632 start.go:297] selected driver: qemu2
	I0916 10:40:52.896013    3632 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:40:52.896020    3632 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:40:52.898289    3632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:40:52.901942    3632 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:40:52.905131    3632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:40:52.905172    3632 cni.go:84] Creating CNI manager for ""
	I0916 10:40:52.905195    3632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:40:52.905200    3632 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:40:52.905232    3632 start.go:340] cluster config:
	{Name:test-preload-006000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:52.909010    3632 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:52.915997    3632 out.go:177] * Starting "test-preload-006000" primary control-plane node in "test-preload-006000" cluster
	I0916 10:40:52.920012    3632 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0916 10:40:52.920107    3632 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/test-preload-006000/config.json ...
	I0916 10:40:52.920130    3632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/test-preload-006000/config.json: {Name:mk0ed2c8560422a8b36999614d5fe43b639829f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:40:52.920117    3632 cache.go:107] acquiring lock: {Name:mkde49bb287bbe34779fa813ad9c7bbddd51b206 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:52.920117    3632 cache.go:107] acquiring lock: {Name:mk9957ee1584da5e9c74daf97ce53b8c1c1ab620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:52.920175    3632 cache.go:107] acquiring lock: {Name:mk2c3ae1873ccd27c4843758e7dc3e78f2607825 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:52.920243    3632 cache.go:107] acquiring lock: {Name:mkeb91e8854959ae932baa48137e13e7871435b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:52.920365    3632 cache.go:107] acquiring lock: {Name:mk2029c5b399c7cc0da9524250fde292798ac3d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:52.920389    3632 cache.go:107] acquiring lock: {Name:mke1552684e5cdbeffa9342767aa37edcf687a28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:52.920392    3632 cache.go:107] acquiring lock: {Name:mk9b005388e004a92c6ed6a97a208115cae1781f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:52.920411    3632 cache.go:107] acquiring lock: {Name:mk8275fb6f77dcb52b53b6fa291d336960350bb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:52.920552    3632 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 10:40:52.920596    3632 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:40:52.920601    3632 start.go:360] acquireMachinesLock for test-preload-006000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:40:52.920624    3632 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:40:52.920639    3632 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0916 10:40:52.920613    3632 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0916 10:40:52.920613    3632 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0916 10:40:52.920720    3632 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:40:52.920736    3632 start.go:364] duration metric: took 114.708µs to acquireMachinesLock for "test-preload-006000"
	I0916 10:40:52.920749    3632 start.go:93] Provisioning new machine with config: &{Name:test-preload-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:40:52.920793    3632 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:40:52.920850    3632 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0916 10:40:52.927935    3632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:40:52.932132    3632 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:40:52.932173    3632 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:40:52.932661    3632 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:40:52.935274    3632 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0916 10:40:52.935275    3632 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0916 10:40:52.935278    3632 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0916 10:40:52.935290    3632 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0916 10:40:52.935310    3632 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0916 10:40:52.946513    3632 start.go:159] libmachine.API.Create for "test-preload-006000" (driver="qemu2")
	I0916 10:40:52.946535    3632 client.go:168] LocalClient.Create starting
	I0916 10:40:52.946623    3632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:40:52.946654    3632 main.go:141] libmachine: Decoding PEM data...
	I0916 10:40:52.946663    3632 main.go:141] libmachine: Parsing certificate...
	I0916 10:40:52.946705    3632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:40:52.946731    3632 main.go:141] libmachine: Decoding PEM data...
	I0916 10:40:52.946740    3632 main.go:141] libmachine: Parsing certificate...
	I0916 10:40:52.947087    3632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:40:53.114153    3632 main.go:141] libmachine: Creating SSH key...
	I0916 10:40:53.211499    3632 main.go:141] libmachine: Creating Disk image...
	I0916 10:40:53.211522    3632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:40:53.211715    3632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2
	I0916 10:40:53.221742    3632 main.go:141] libmachine: STDOUT: 
	I0916 10:40:53.221761    3632 main.go:141] libmachine: STDERR: 
	I0916 10:40:53.221827    3632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2 +20000M
	I0916 10:40:53.230507    3632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:40:53.230535    3632 main.go:141] libmachine: STDERR: 
	I0916 10:40:53.230551    3632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2
	I0916 10:40:53.230560    3632 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:40:53.230575    3632 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:40:53.230608    3632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:21:f6:e9:f5:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2
	I0916 10:40:53.232395    3632 main.go:141] libmachine: STDOUT: 
	I0916 10:40:53.232411    3632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:40:53.232433    3632 client.go:171] duration metric: took 285.898209ms to LocalClient.Create
	W0916 10:40:53.439473    3632 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0916 10:40:53.439513    3632 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0916 10:40:53.441804    3632 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0916 10:40:53.469111    3632 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0916 10:40:53.473201    3632 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0916 10:40:53.476573    3632 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0916 10:40:53.482957    3632 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0916 10:40:53.544648    3632 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0916 10:40:53.609242    3632 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0916 10:40:53.609302    3632 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 689.073583ms
	I0916 10:40:53.609331    3632 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0916 10:40:53.801294    3632 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0916 10:40:53.801399    3632 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 10:40:54.300577    3632 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 10:40:54.300626    3632 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.380541666s
	I0916 10:40:54.300651    3632 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 10:40:55.232672    3632 start.go:128] duration metric: took 2.311903s to createHost
	I0916 10:40:55.232721    3632 start.go:83] releasing machines lock for "test-preload-006000", held for 2.312030042s
	W0916 10:40:55.232770    3632 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:40:55.250943    3632 out.go:177] * Deleting "test-preload-006000" in qemu2 ...
	W0916 10:40:55.286614    3632 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:40:55.286644    3632 start.go:729] Will try again in 5 seconds ...
	I0916 10:40:55.331446    3632 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0916 10:40:55.331481    3632 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.411122958s
	I0916 10:40:55.331507    3632 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0916 10:40:56.917301    3632 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0916 10:40:56.917350    3632 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.99719125s
	I0916 10:40:56.917381    3632 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0916 10:40:58.236879    3632 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0916 10:40:58.236927    3632 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.316934083s
	I0916 10:40:58.236976    3632 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0916 10:40:58.635373    3632 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0916 10:40:58.635425    3632 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.715409084s
	I0916 10:40:58.635451    3632 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0916 10:40:59.103719    3632 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0916 10:40:59.103767    3632 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.183593292s
	I0916 10:40:59.103791    3632 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0916 10:41:00.286728    3632 start.go:360] acquireMachinesLock for test-preload-006000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:41:00.287182    3632 start.go:364] duration metric: took 377.542µs to acquireMachinesLock for "test-preload-006000"
	I0916 10:41:00.287285    3632 start.go:93] Provisioning new machine with config: &{Name:test-preload-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:41:00.287571    3632 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:41:00.298064    3632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:41:00.349381    3632 start.go:159] libmachine.API.Create for "test-preload-006000" (driver="qemu2")
	I0916 10:41:00.349456    3632 client.go:168] LocalClient.Create starting
	I0916 10:41:00.349614    3632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:41:00.349677    3632 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:00.349696    3632 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:00.349766    3632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:41:00.349812    3632 main.go:141] libmachine: Decoding PEM data...
	I0916 10:41:00.349832    3632 main.go:141] libmachine: Parsing certificate...
	I0916 10:41:00.350329    3632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:41:00.520925    3632 main.go:141] libmachine: Creating SSH key...
	I0916 10:41:00.669353    3632 main.go:141] libmachine: Creating Disk image...
	I0916 10:41:00.669363    3632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:41:00.669530    3632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2
	I0916 10:41:00.678823    3632 main.go:141] libmachine: STDOUT: 
	I0916 10:41:00.678834    3632 main.go:141] libmachine: STDERR: 
	I0916 10:41:00.678890    3632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2 +20000M
	I0916 10:41:00.686926    3632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:41:00.686943    3632 main.go:141] libmachine: STDERR: 
	I0916 10:41:00.686957    3632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2
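
The driver produced the VM disk in two qemu-img steps: a raw-to-qcow2 convert followed by a +20000M resize. A quick hedged sanity check of the result, assuming the same qemu-img binary is on PATH (the image path is copied from the log):

	# Format should be qcow2 and virtual size should reflect the +20000M resize.
	qemu-img info /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2
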
	I0916 10:41:00.686960    3632 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:41:00.686974    3632 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:41:00.687012    3632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:f3:b8:95:68:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/test-preload-006000/disk.qcow2
	I0916 10:41:00.688779    3632 main.go:141] libmachine: STDOUT: 
	I0916 10:41:00.688793    3632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:41:00.688806    3632 client.go:171] duration metric: took 339.344208ms to LocalClient.Create
	I0916 10:41:02.688991    3632 start.go:128] duration metric: took 2.40143975s to createHost
	I0916 10:41:02.689036    3632 start.go:83] releasing machines lock for "test-preload-006000", held for 2.401887208s
	W0916 10:41:02.689289    3632 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-006000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:41:02.698973    3632 out.go:201] 
	W0916 10:41:02.702801    3632 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:41:02.702844    3632 out.go:270] * 
	W0916 10:41:02.705233    3632 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:41:02.715949    3632 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-006000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-16 10:41:02.731377 -0700 PDT m=+2200.876937460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-006000 -n test-preload-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-006000 -n test-preload-006000: exit status 7 (68.955875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-006000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-006000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-006000
--- FAIL: TestPreload (10.08s)
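
Every start attempt above dies at the same host-side step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, so no VM ever boots and the run exits with status 80. A minimal shell sketch for checking the daemon before re-running the suite (socket and client paths are taken from the log above; the Homebrew service name is an assumption):

	# The socket should exist and a socket_vmnet process should be serving it.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumption: socket_vmnet was installed via Homebrew and runs as a root service.
	sudo brew services restart socket_vmnet
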

TestScheduledStopUnix (10.1s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-323000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-323000 --memory=2048 --driver=qemu2 : exit status 80 (9.943521083s)

-- stdout --
	* [scheduled-stop-323000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-323000" primary control-plane node in "scheduled-stop-323000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-323000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-323000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-323000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-323000" primary control-plane node in "scheduled-stop-323000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-323000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-323000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-16 10:41:12.826436 -0700 PDT m=+2210.972233668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-323000 -n scheduled-stop-323000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-323000 -n scheduled-stop-323000: exit status 7 (69.065291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-323000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-323000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-323000
--- FAIL: TestScheduledStopUnix (10.10s)
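
The scheduled-stop logic is never reached; the run fails during VM creation on the same socket_vmnet refusal. As a hedged way to take the vmnet daemon out of the picture when reproducing, the qemu2 driver also supports user-mode networking via the --network flag ("builtin" trades away host-to-VM networking features but needs no daemon; the profile name and other flags are reused from the log):

	out/minikube-darwin-arm64 start -p scheduled-stop-323000 --memory=2048 --driver=qemu2 --network=builtin
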

TestSkaffold (12.42s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2133613719 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2133613719 version: (1.061597459s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-578000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-578000 --memory=2600 --driver=qemu2 : exit status 80 (9.791981166s)

-- stdout --
	* [skaffold-578000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-578000" primary control-plane node in "skaffold-578000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-578000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-578000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-578000" primary control-plane node in "skaffold-578000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-578000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-16 10:41:25.253579 -0700 PDT m=+2223.399669751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-578000 -n skaffold-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-578000 -n skaffold-578000: exit status 7 (61.09825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-578000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-578000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-578000
--- FAIL: TestSkaffold (12.42s)

TestRunningBinaryUpgrade (598.35s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1706634703 start -p running-upgrade-707000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1706634703 start -p running-upgrade-707000 --memory=2200 --vm-driver=qemu2 : (1m1.349237125s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-707000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0916 10:43:12.953492    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-707000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.224741667s)

-- stdout --
	* [running-upgrade-707000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-707000" primary control-plane node in "running-upgrade-707000" cluster
	* Updating the running qemu2 "running-upgrade-707000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0916 10:43:08.735460    4019 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:43:08.735690    4019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:43:08.735693    4019 out.go:358] Setting ErrFile to fd 2...
	I0916 10:43:08.735696    4019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:43:08.735830    4019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:43:08.737106    4019 out.go:352] Setting JSON to false
	I0916 10:43:08.753494    4019 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2552,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:43:08.753560    4019 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:43:08.759743    4019 out.go:177] * [running-upgrade-707000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:43:08.767831    4019 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:43:08.767895    4019 notify.go:220] Checking for updates...
	I0916 10:43:08.774781    4019 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:43:08.777758    4019 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:43:08.780807    4019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:43:08.783721    4019 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:43:08.786772    4019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:43:08.790028    4019 config.go:182] Loaded profile config "running-upgrade-707000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:43:08.791614    4019 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 10:43:08.794778    4019 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:43:08.798753    4019 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:43:08.803801    4019 start.go:297] selected driver: qemu2
	I0916 10:43:08.803808    4019 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-707000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50291 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-707000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 10:43:08.803867    4019 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:43:08.806321    4019 cni.go:84] Creating CNI manager for ""
	I0916 10:43:08.806356    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:43:08.806386    4019 start.go:340] cluster config:
	{Name:running-upgrade-707000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50291 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-707000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 10:43:08.806439    4019 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:43:08.813742    4019 out.go:177] * Starting "running-upgrade-707000" primary control-plane node in "running-upgrade-707000" cluster
	I0916 10:43:08.817808    4019 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 10:43:08.817828    4019 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0916 10:43:08.817835    4019 cache.go:56] Caching tarball of preloaded images
	I0916 10:43:08.817899    4019 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:43:08.817905    4019 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0916 10:43:08.817957    4019 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/config.json ...
	I0916 10:43:08.818355    4019 start.go:360] acquireMachinesLock for running-upgrade-707000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:43:08.818392    4019 start.go:364] duration metric: took 30µs to acquireMachinesLock for "running-upgrade-707000"
	I0916 10:43:08.818401    4019 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:43:08.818406    4019 fix.go:54] fixHost starting: 
	I0916 10:43:08.819028    4019 fix.go:112] recreateIfNeeded on running-upgrade-707000: state=Running err=<nil>
	W0916 10:43:08.819037    4019 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:43:08.826733    4019 out.go:177] * Updating the running qemu2 "running-upgrade-707000" VM ...
	I0916 10:43:08.830738    4019 machine.go:93] provisionDockerMachine start ...
	I0916 10:43:08.830777    4019 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:08.830870    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101285190] 0x1012879d0 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0916 10:43:08.830875    4019 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:43:08.900278    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-707000
	
	I0916 10:43:08.900287    4019 buildroot.go:166] provisioning hostname "running-upgrade-707000"
	I0916 10:43:08.900328    4019 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:08.900428    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101285190] 0x1012879d0 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0916 10:43:08.900438    4019 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-707000 && echo "running-upgrade-707000" | sudo tee /etc/hostname
	I0916 10:43:08.975396    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-707000
	
	I0916 10:43:08.975457    4019 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:08.975564    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101285190] 0x1012879d0 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0916 10:43:08.975584    4019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-707000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-707000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-707000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:43:09.043360    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:43:09.043399    4019 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19649-964/.minikube CaCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19649-964/.minikube}
	I0916 10:43:09.043407    4019 buildroot.go:174] setting up certificates
	I0916 10:43:09.043412    4019 provision.go:84] configureAuth start
	I0916 10:43:09.043417    4019 provision.go:143] copyHostCerts
	I0916 10:43:09.043471    4019 exec_runner.go:144] found /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem, removing ...
	I0916 10:43:09.043485    4019 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem
	I0916 10:43:09.043624    4019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem (1082 bytes)
	I0916 10:43:09.043798    4019 exec_runner.go:144] found /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem, removing ...
	I0916 10:43:09.043802    4019 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem
	I0916 10:43:09.043859    4019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem (1123 bytes)
	I0916 10:43:09.043961    4019 exec_runner.go:144] found /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem, removing ...
	I0916 10:43:09.043964    4019 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem
	I0916 10:43:09.044015    4019 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem (1679 bytes)
	I0916 10:43:09.044115    4019 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-707000 san=[127.0.0.1 localhost minikube running-upgrade-707000]
	I0916 10:43:09.213371    4019 provision.go:177] copyRemoteCerts
	I0916 10:43:09.213425    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:43:09.213434    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/id_rsa Username:docker}
	I0916 10:43:09.250341    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:43:09.256832    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:43:09.263737    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 10:43:09.270391    4019 provision.go:87] duration metric: took 226.976833ms to configureAuth
	I0916 10:43:09.270400    4019 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:43:09.270513    4019 config.go:182] Loaded profile config "running-upgrade-707000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:43:09.270555    4019 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:09.270649    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101285190] 0x1012879d0 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0916 10:43:09.270655    4019 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 10:43:09.341001    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 10:43:09.341016    4019 buildroot.go:70] root file system type: tmpfs
	I0916 10:43:09.341068    4019 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 10:43:09.341117    4019 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:09.341230    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101285190] 0x1012879d0 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0916 10:43:09.341266    4019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 10:43:09.413572    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 10:43:09.413634    4019 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:09.413747    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101285190] 0x1012879d0 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0916 10:43:09.413757    4019 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 10:43:09.484249    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:43:09.484259    4019 machine.go:96] duration metric: took 653.531167ms to provisionDockerMachine
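
provisionDockerMachine renders the docker unit to docker.service.new and only swaps it into place when it differs from the live unit, so re-provisioning an unchanged machine never restarts docker. The install-if-changed idiom from the SSH command above, isolated as a sketch (paths as in the log; the -f enable step is elided):

	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl daemon-reload && sudo systemctl restart docker; }
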
	I0916 10:43:09.484265    4019 start.go:293] postStartSetup for "running-upgrade-707000" (driver="qemu2")
	I0916 10:43:09.484271    4019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:43:09.484320    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:43:09.484329    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/id_rsa Username:docker}
	I0916 10:43:09.521836    4019 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:43:09.523134    4019 info.go:137] Remote host: Buildroot 2021.02.12
	I0916 10:43:09.523142    4019 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19649-964/.minikube/addons for local assets ...
	I0916 10:43:09.523237    4019 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19649-964/.minikube/files for local assets ...
	I0916 10:43:09.523353    4019 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem -> 14512.pem in /etc/ssl/certs
	I0916 10:43:09.523488    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:43:09.526207    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem --> /etc/ssl/certs/14512.pem (1708 bytes)
	I0916 10:43:09.532837    4019 start.go:296] duration metric: took 48.567958ms for postStartSetup
	I0916 10:43:09.532854    4019 fix.go:56] duration metric: took 714.467ms for fixHost
	I0916 10:43:09.532910    4019 main.go:141] libmachine: Using SSH client type: native
	I0916 10:43:09.533027    4019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x101285190] 0x1012879d0 <nil>  [] 0s} localhost 50259 <nil> <nil>}
	I0916 10:43:09.533032    4019 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:43:09.602847    4019 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726508589.224637684
	
	I0916 10:43:09.602856    4019 fix.go:216] guest clock: 1726508589.224637684
	I0916 10:43:09.602860    4019 fix.go:229] Guest: 2024-09-16 10:43:09.224637684 -0700 PDT Remote: 2024-09-16 10:43:09.532863 -0700 PDT m=+0.818635168 (delta=-308.225316ms)
	I0916 10:43:09.602880    4019 fix.go:200] guest clock delta is within tolerance: -308.225316ms
	I0916 10:43:09.602882    4019 start.go:83] releasing machines lock for "running-upgrade-707000", held for 784.504083ms
	I0916 10:43:09.602950    4019 ssh_runner.go:195] Run: cat /version.json
	I0916 10:43:09.602959    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/id_rsa Username:docker}
	I0916 10:43:09.602950    4019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:43:09.602982    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/id_rsa Username:docker}
	W0916 10:43:09.603614    4019 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50259: connect: connection refused
	I0916 10:43:09.603635    4019 retry.go:31] will retry after 202.0459ms: dial tcp [::1]:50259: connect: connection refused
	W0916 10:43:09.639225    4019 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0916 10:43:09.639279    4019 ssh_runner.go:195] Run: systemctl --version
	I0916 10:43:09.641260    4019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:43:09.642895    4019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:43:09.642922    4019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0916 10:43:09.645662    4019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0916 10:43:09.650216    4019 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:43:09.650222    4019 start.go:495] detecting cgroup driver to use...
	I0916 10:43:09.650289    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:43:09.655278    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0916 10:43:09.658150    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:43:09.660946    4019 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:43:09.660972    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:43:09.664422    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:43:09.667886    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:43:09.671028    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:43:09.673847    4019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:43:09.676815    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:43:09.680300    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:43:09.683653    4019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
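
The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (cgroupfs), move the v1/runc.v1 runtimes to io.containerd.runc.v2, reset conf_dir, and re-inject enable_unprivileged_ports under the CRI plugin table. Below is a minimal local Go sketch of one such line-oriented rewrite; the sample TOML and in-memory handling are illustrative stand-ins, not minikube's code.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/containerd/config.toml content.
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	// (?m) makes ^ and $ match per line, mirroring the line-oriented edit:
	// sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
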
	I0916 10:43:09.686641    4019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:43:09.689267    4019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:43:09.692436    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:43:09.787691    4019 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:43:09.794908    4019 start.go:495] detecting cgroup driver to use...
	I0916 10:43:09.794983    4019 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 10:43:09.800871    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:43:09.806250    4019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:43:09.814255    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:43:09.819169    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:43:09.823215    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:43:09.828975    4019 ssh_runner.go:195] Run: which cri-dockerd
	I0916 10:43:09.830478    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:43:09.833139    4019 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0916 10:43:09.837907    4019 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 10:43:09.923001    4019 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 10:43:10.013674    4019 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:43:10.013744    4019 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:43:10.018875    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:43:10.109963    4019 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:43:11.615912    4019 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.505967375s)
	I0916 10:43:11.615987    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:43:11.620974    4019 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 10:43:11.627647    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:43:11.632826    4019 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:43:11.714614    4019 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 10:43:11.792971    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:43:11.873506    4019 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 10:43:11.879644    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:43:11.883883    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:43:11.950410    4019 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 10:43:11.990563    4019 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:43:11.990660    4019 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 10:43:11.993047    4019 start.go:563] Will wait 60s for crictl version
	I0916 10:43:11.993104    4019 ssh_runner.go:195] Run: which crictl
	I0916 10:43:11.995263    4019 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:43:12.006449    4019 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0916 10:43:12.006541    4019 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:43:12.019373    4019 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:43:12.039615    4019 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0916 10:43:12.039763    4019 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0916 10:43:12.041154    4019 kubeadm.go:883] updating cluster {Name:running-upgrade-707000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50291 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-707000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0916 10:43:12.041197    4019 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 10:43:12.041249    4019 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:43:12.051836    4019 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 10:43:12.051844    4019 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 10:43:12.051900    4019 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:43:12.055304    4019 ssh_runner.go:195] Run: which lz4
	I0916 10:43:12.056619    4019 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:43:12.057847    4019 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:43:12.057860    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0916 10:43:12.991847    4019 docker.go:649] duration metric: took 935.290584ms to copy over tarball
	I0916 10:43:12.991914    4019 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:43:14.294590    4019 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.302691541s)
	I0916 10:43:14.294604    4019 ssh_runner.go:146] rm: /preloaded.tar.lz4
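
The preload step above follows a check-copy-extract-delete pattern: stat the target to skip a redundant transfer, scp the tarball over on a miss, unpack it with tar's lz4 filter, then remove the archive. Below is a local Go sketch of the same flow, assuming a tar binary with lz4 support on PATH; the paths are illustrative stand-ins for the ssh_runner steps.

package main

import (
	"io"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	src := "preloaded.tar.lz4" // cached tarball on the host (assumed present)
	dst := filepath.Join(os.TempDir(), "preloaded.tar.lz4")
	unpack := filepath.Join(os.TempDir(), "var")

	// Existence check first, as in ssh_runner.go:352; only copy on a miss.
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		in, err := os.Open(src)
		if err != nil {
			log.Fatal(err)
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := io.Copy(out, in); err != nil { // stands in for the scp
			log.Fatal(err)
		}
		out.Close()
	}
	if err := os.MkdirAll(unpack, 0o755); err != nil {
		log.Fatal(err)
	}
	// Same tar flags the log shows: lz4 filter, preserve security xattrs.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", unpack, "-xf", dst)
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	os.Remove(dst) // rm the tarball once the images are unpacked
}
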
	I0916 10:43:14.310583    4019 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:43:14.313380    4019 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0916 10:43:14.318223    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:43:14.394913    4019 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:43:15.559765    4019 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164853083s)
	I0916 10:43:15.559877    4019 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:43:15.573914    4019 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 10:43:15.573924    4019 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 10:43:15.573929    4019 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 10:43:15.578812    4019 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:43:15.581268    4019 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:43:15.582781    4019 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:43:15.582901    4019 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:43:15.585687    4019 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0916 10:43:15.585723    4019 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:43:15.587636    4019 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:43:15.587745    4019 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:43:15.589425    4019 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:43:15.589425    4019 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0916 10:43:15.591396    4019 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:43:15.591433    4019 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:43:15.592937    4019 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:43:15.592971    4019 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:43:15.594454    4019 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:43:15.595360    4019 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:43:15.986523    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0916 10:43:15.999213    4019 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0916 10:43:15.999246    4019 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0916 10:43:15.999314    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0916 10:43:16.011098    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0916 10:43:16.011230    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0916 10:43:16.012916    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0916 10:43:16.012928    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0916 10:43:16.020736    4019 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0916 10:43:16.020747    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0916 10:43:16.022236    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:43:16.025937    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:43:16.026310    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:43:16.047240    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:43:16.063191    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0916 10:43:16.073884    4019 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0916 10:43:16.074047    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:43:16.077029    4019 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0916 10:43:16.077032    4019 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0916 10:43:16.077048    4019 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:43:16.077048    4019 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:43:16.077135    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:43:16.077095    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:43:16.078195    4019 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0916 10:43:16.078205    4019 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:43:16.078232    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:43:16.082796    4019 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0916 10:43:16.082816    4019 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:43:16.082885    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:43:16.097247    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0916 10:43:16.104232    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0916 10:43:16.104296    4019 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0916 10:43:16.104310    4019 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:43:16.104375    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:43:16.112539    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0916 10:43:16.112547    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0916 10:43:16.112840    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0916 10:43:16.120504    4019 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0916 10:43:16.120527    4019 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:43:16.120595    4019 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0916 10:43:16.125232    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0916 10:43:16.125375    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0916 10:43:16.133483    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0916 10:43:16.133515    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0916 10:43:16.133640    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0916 10:43:16.173550    4019 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0916 10:43:16.173563    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0916 10:43:16.212711    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0916 10:43:16.397720    4019 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0916 10:43:16.398128    4019 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:43:16.426473    4019 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0916 10:43:16.426511    4019 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:43:16.426631    4019 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:43:16.809832    4019 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 10:43:16.810235    4019 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 10:43:16.815888    4019 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 10:43:16.815925    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0916 10:43:16.874191    4019 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 10:43:16.874206    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0916 10:43:17.109275    4019 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 10:43:17.109323    4019 cache_images.go:92] duration metric: took 1.535423s to LoadCachedImages
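
Each cached image above is loaded by piping the tarball into docker load through a shell ("sudo cat ... | docker load"). Below is a Go sketch that achieves the same effect without the shell by wiring the file straight into the command's stdin; the image path is illustrative.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open("/var/lib/minikube/images/storage-provisioner_v5")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // replaces the `cat ... |` pipe
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
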
	W0916 10:43:17.109359    4019 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0916 10:43:17.109367    4019 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0916 10:43:17.109423    4019 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-707000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-707000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:43:17.109497    4019 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 10:43:17.123427    4019 cni.go:84] Creating CNI manager for ""
	I0916 10:43:17.123445    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:43:17.123451    4019 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:43:17.123460    4019 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-707000 NodeName:running-upgrade-707000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:43:17.123532    4019 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-707000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:43:17.123597    4019 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0916 10:43:17.126510    4019 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:43:17.126538    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:43:17.129075    4019 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0916 10:43:17.133957    4019 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:43:17.138917    4019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0916 10:43:17.144304    4019 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0916 10:43:17.145844    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:43:17.224524    4019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:43:17.230317    4019 certs.go:68] Setting up /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000 for IP: 10.0.2.15
	I0916 10:43:17.230325    4019 certs.go:194] generating shared ca certs ...
	I0916 10:43:17.230333    4019 certs.go:226] acquiring lock for ca certs: {Name:mk95bad6e61a22ab8ae5ec5f8cd43ca9ad7a3f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:43:17.230482    4019 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key
	I0916 10:43:17.230519    4019 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key
	I0916 10:43:17.230526    4019 certs.go:256] generating profile certs ...
	I0916 10:43:17.230588    4019 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/client.key
	I0916 10:43:17.230605    4019 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.key.22fced3a
	I0916 10:43:17.230616    4019 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.crt.22fced3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0916 10:43:17.452037    4019 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.crt.22fced3a ...
	I0916 10:43:17.452053    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.crt.22fced3a: {Name:mk288b5f7ee6dadeeb8869768ddb481ad785327a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:43:17.452361    4019 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.key.22fced3a ...
	I0916 10:43:17.452366    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.key.22fced3a: {Name:mk38bc614be600146f60fa621e675a9f89c4c8c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:43:17.452503    4019 certs.go:381] copying /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.crt.22fced3a -> /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.crt
	I0916 10:43:17.452636    4019 certs.go:385] copying /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.key.22fced3a -> /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.key
	I0916 10:43:17.452763    4019 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/proxy-client.key
	I0916 10:43:17.452905    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/1451.pem (1338 bytes)
	W0916 10:43:17.452934    4019 certs.go:480] ignoring /Users/jenkins/minikube-integration/19649-964/.minikube/certs/1451_empty.pem, impossibly tiny 0 bytes
	I0916 10:43:17.452940    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:43:17.452959    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:43:17.452977    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:43:17.452994    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem (1679 bytes)
	I0916 10:43:17.453033    4019 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem (1708 bytes)
	I0916 10:43:17.453351    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:43:17.460736    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:43:17.468121    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:43:17.474823    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 10:43:17.481611    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 10:43:17.489087    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:43:17.496043    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:43:17.503427    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:43:17.527841    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/certs/1451.pem --> /usr/share/ca-certificates/1451.pem (1338 bytes)
	I0916 10:43:17.549258    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem --> /usr/share/ca-certificates/14512.pem (1708 bytes)
	I0916 10:43:17.558084    4019 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:43:17.570029    4019 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:43:17.590558    4019 ssh_runner.go:195] Run: openssl version
	I0916 10:43:17.593290    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1451.pem && ln -fs /usr/share/ca-certificates/1451.pem /etc/ssl/certs/1451.pem"
	I0916 10:43:17.599636    4019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1451.pem
	I0916 10:43:17.603530    4019 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 17:19 /usr/share/ca-certificates/1451.pem
	I0916 10:43:17.603557    4019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1451.pem
	I0916 10:43:17.605849    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1451.pem /etc/ssl/certs/51391683.0"
	I0916 10:43:17.613680    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14512.pem && ln -fs /usr/share/ca-certificates/14512.pem /etc/ssl/certs/14512.pem"
	I0916 10:43:17.618678    4019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14512.pem
	I0916 10:43:17.620361    4019 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 17:19 /usr/share/ca-certificates/14512.pem
	I0916 10:43:17.620389    4019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14512.pem
	I0916 10:43:17.622401    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14512.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:43:17.627872    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:43:17.633412    4019 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:43:17.635139    4019 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:43:17.635162    4019 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:43:17.637128    4019 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:43:17.643385    4019 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:43:17.645131    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:43:17.647231    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:43:17.649363    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:43:17.653070    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:43:17.656190    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:43:17.659433    4019 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
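
The openssl invocations above do two different jobs: "x509 -hash -noout" prints the subject hash used to name the /etc/ssl/certs/<hash>.0 symlink, and "-checkend 86400" exits non-zero when a certificate expires within the next 24 hours. Below is a Go sketch of both checks, assuming openssl is on PATH; the certificate path is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative

	// Subject hash -> name of the /etc/ssl/certs/<hash>.0 symlink.
	hash, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	fmt.Printf("link name: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(hash)))

	// Non-zero exit here means the cert expires within 86400s (24h).
	if err := exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", "86400").Run(); err != nil {
		fmt.Println("certificate expires within 24h (or check failed):", err)
		return
	}
	fmt.Println("certificate valid for at least 24h")
}
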
	I0916 10:43:17.666776    4019 kubeadm.go:392] StartCluster: {Name:running-upgrade-707000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50291 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-707000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 10:43:17.666873    4019 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:43:17.690151    4019 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:43:17.696479    4019 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:43:17.696490    4019 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:43:17.696525    4019 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:43:17.700575    4019 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:43:17.700830    4019 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-707000" does not appear in /Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:43:17.700884    4019 kubeconfig.go:62] /Users/jenkins/minikube-integration/19649-964/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-707000" cluster setting kubeconfig missing "running-upgrade-707000" context setting]
	I0916 10:43:17.701016    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/kubeconfig: {Name:mk3766c19461825f7de68cf1dc4ddceadf57e288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:43:17.701456    4019 kapi.go:59] client config for running-upgrade-707000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/client.key", CAFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10285d800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:43:17.701800    4019 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:43:17.717544    4019 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-707000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
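
The drift check above leans on diff's exit status convention: 0 means the files match, 1 means they differ (drift, so reconfigure), anything else is an error. Below is a Go sketch of the same convention, with the file names taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output() // stdout is still returned on a non-zero exit
	if err == nil {
		fmt.Println("no drift: configs identical")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("drift detected, reconfiguring from the new config:\n%s", out)
		return
	}
	fmt.Println("diff failed:", err) // exit 2: missing file or other trouble
}
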
	I0916 10:43:17.717557    4019 kubeadm.go:1160] stopping kube-system containers ...
	I0916 10:43:17.717635    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:43:17.742800    4019 docker.go:483] Stopping containers: [5e74c48fb556 33f1d0a073dd 6346513872dc 50cc8955d438 72251de4dd90 2c8d46f20fa4 2c3f077fcf0b dd5a08326faa ccdae88208fa 59355d49e4e3 c4395639db33 cdbd741d12ba d0d54b121afb 375a57b20ac5]
	I0916 10:43:17.742887    4019 ssh_runner.go:195] Run: docker stop 5e74c48fb556 33f1d0a073dd 6346513872dc 50cc8955d438 72251de4dd90 2c8d46f20fa4 2c3f077fcf0b dd5a08326faa ccdae88208fa 59355d49e4e3 c4395639db33 cdbd741d12ba d0d54b121afb 375a57b20ac5
	I0916 10:43:17.926395    4019 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 10:43:18.042124    4019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:43:18.046519    4019 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 16 17:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 16 17:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 16 17:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 16 17:42 /etc/kubernetes/scheduler.conf
	
	I0916 10:43:18.046562    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf
	I0916 10:43:18.050057    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:43:18.050090    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:43:18.053662    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf
	I0916 10:43:18.057146    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:43:18.057180    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:43:18.060497    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf
	I0916 10:43:18.063513    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:43:18.063541    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:43:18.066105    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf
	I0916 10:43:18.068988    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:43:18.069018    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:43:18.072061    4019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:43:18.074802    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:43:18.094892    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:43:18.487026    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:43:18.679365    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:43:18.700118    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
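
The restart path above replays selected "kubeadm init phase" subcommands in a fixed order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. Below is a sketch of that sequencing using the pinned binary path from the log; real minikube runs these over SSH with sudo, so this is illustrative only.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err) // a failed phase aborts the restart
		}
	}
}
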
	I0916 10:43:18.721225    4019 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:43:18.721292    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:43:19.223818    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:43:19.723665    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:43:20.222561    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:43:20.723380    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:43:20.727790    4019 api_server.go:72] duration metric: took 2.00661375s to wait for apiserver process to appear ...
	I0916 10:43:20.727799    4019 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:43:20.727809    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:43:25.728245    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:43:25.728275    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:43:30.729688    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:43:30.729731    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:43:35.730108    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:43:35.730203    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:43:40.731061    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:43:40.731110    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:43:45.731978    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:43:45.732075    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:43:50.733475    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:43:50.733585    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:43:55.735939    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:43:55.736030    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:44:00.737778    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:44:00.737886    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:44:05.740642    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:44:05.740749    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:44:10.743366    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:44:10.743460    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:44:15.746076    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:44:15.746178    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:44:20.749026    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
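
Each probe above gives up after the 5s client timeout, and the loop retries until an overall deadline expires, at which point minikube falls back to gathering logs. Below is a Go sketch of such a poll loop; the endpoint comes from the log, while the deadline and the InsecureSkipVerify shortcut are illustrative (minikube verifies against its own CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the per-probe Client.Timeout in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(4 * time.Minute) // illustrative overall budget
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause between probes
	}
	fmt.Println("gave up waiting for /healthz")
}
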
	I0916 10:44:20.749475    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:44:20.782844    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:44:20.783012    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:44:20.803072    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:44:20.803201    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:44:20.816695    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:44:20.816786    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:44:20.828635    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:44:20.828725    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:44:20.839562    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:44:20.839653    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:44:20.850249    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:44:20.850339    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:44:20.860591    4019 logs.go:276] 0 containers: []
	W0916 10:44:20.860600    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:44:20.860664    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:44:20.871262    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:44:20.871277    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:44:20.871283    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:44:20.883728    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:44:20.883739    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:44:20.888308    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:44:20.888316    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:44:20.903075    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:44:20.903087    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:44:20.914465    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:44:20.914475    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:44:20.926819    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:44:20.926833    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:44:20.939698    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:44:20.939707    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:44:20.956145    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:44:20.956153    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:44:20.983074    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:44:20.983088    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:44:21.051725    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:44:21.051739    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:44:21.064502    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:44:21.064513    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:44:21.075529    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:44:21.075546    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:44:21.095601    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:44:21.095612    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:44:21.106648    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:44:21.106658    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:44:21.145285    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:44:21.145293    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:44:21.159374    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:44:21.159383    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:44:21.170264    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:44:21.170274    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:44:23.696542    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:44:28.697808    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
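The healthz checks that bracket every collection cycle are plain HTTPS GETs against the guest apiserver at 10.0.2.15:8443, and each one here consumes its full client timeout (10:44:23.69 -> 10:44:28.69, about 5 s) before failing with "Client.Timeout exceeded while awaiting headers", i.e. the apiserver never sends response headers at all. A minimal probe sketch follows, assuming a 5-second timeout and skipped certificate verification as a test cluster's self-signed certs would require; both details are inferred from the log, not taken from minikube's source.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5 s gap between check and "stopped"
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: self-signed test certs
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "Client.Timeout exceeded while awaiting headers"
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }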
	I0916 10:44:28.698134    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:44:28.733034    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:44:28.733155    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:44:28.757147    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:44:28.757243    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:44:28.775825    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:44:28.775910    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:44:28.786257    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:44:28.786347    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:44:28.801063    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:44:28.801144    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:44:28.811607    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:44:28.811691    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:44:28.822182    4019 logs.go:276] 0 containers: []
	W0916 10:44:28.822194    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:44:28.822261    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:44:28.832387    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:44:28.832404    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:44:28.832409    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:44:28.844050    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:44:28.844060    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:44:28.880063    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:44:28.880075    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:44:28.892136    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:44:28.892150    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:44:28.903018    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:44:28.903030    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:44:28.925765    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:44:28.925782    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:44:28.966460    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:44:28.966469    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:44:28.980573    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:44:28.980582    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:44:28.991901    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:44:28.991914    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:44:29.003562    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:44:29.003576    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:44:29.015255    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:44:29.015265    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:44:29.026576    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:44:29.026587    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:44:29.053367    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:44:29.053374    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:44:29.066483    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:44:29.066494    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:44:29.080498    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:44:29.080509    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:44:29.091809    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:44:29.091820    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:44:29.096154    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:44:29.096160    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
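Each failed probe triggers the same fan-out seen in the two cycles above: "docker logs --tail 400 <id>" per discovered container, journalctl for the kubelet and docker/cri-docker units, a severity-filtered dmesg, a container-status check that prefers crictl and falls back to docker ps, and "kubectl describe nodes" run from the version-pinned binary under /var/lib/minikube/binaries/v1.24.1. The gathering order changes from cycle to cycle, which is consistent with ranging over a Go map, though that is an inference from the log rather than a fact from the source. A compressed sketch of the fan-out; the sources map and the runAll name are illustrative, not minikube's real structure, and the command strings are copied from the log.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sources maps a log label to the shell command gathered for it; the
    // single container entry stands in for one generated per discovered ID.
    var sources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        "kube-apiserver [3ed66d8b99fa]": "docker logs --tail 400 3ed66d8b99fa",
    }

    // runAll is a hypothetical name; it mirrors the Gathering-logs fan-out.
    func runAll() {
        for name, cmd := range sources { // map order is random, like the log's
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Println("error:", err)
            }
            fmt.Printf("%s", out)
        }
    }

    func main() { runAll() }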
	I0916 10:44:31.616386    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:44:36.617370    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:44:36.617672    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:44:36.641899    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:44:36.642040    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:44:36.660413    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:44:36.660511    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:44:36.672650    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:44:36.672736    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:44:36.683520    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:44:36.683607    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:44:36.695982    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:44:36.696053    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:44:36.706428    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:44:36.706503    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:44:36.716842    4019 logs.go:276] 0 containers: []
	W0916 10:44:36.716852    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:44:36.716914    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:44:36.730103    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:44:36.730119    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:44:36.730125    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:44:36.734771    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:44:36.734781    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:44:36.746240    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:44:36.746252    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:44:36.762224    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:44:36.762234    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:44:36.788511    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:44:36.788523    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:44:36.800566    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:44:36.800578    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:44:36.835350    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:44:36.835363    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:44:36.846705    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:44:36.846714    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:44:36.861511    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:44:36.861524    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:44:36.872876    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:44:36.872889    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:44:36.886622    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:44:36.886632    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:44:36.898285    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:44:36.898295    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:44:36.909877    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:44:36.909886    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:44:36.920914    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:44:36.920923    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:44:36.960946    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:44:36.960954    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:44:36.974274    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:44:36.974284    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:44:36.991578    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:44:36.991587    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:44:39.505284    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:44:44.507975    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:44:44.508537    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:44:44.556272    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:44:44.556450    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:44:44.576000    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:44:44.576118    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:44:44.593505    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:44:44.593602    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:44:44.610056    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:44:44.610143    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:44:44.621704    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:44:44.621784    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:44:44.632219    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:44:44.632302    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:44:44.642866    4019 logs.go:276] 0 containers: []
	W0916 10:44:44.642879    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:44:44.642953    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:44:44.653404    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:44:44.653422    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:44:44.653428    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:44:44.665491    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:44:44.665502    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:44:44.678064    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:44:44.678078    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:44:44.717238    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:44:44.717245    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:44:44.731778    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:44:44.731792    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:44:44.747028    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:44:44.747038    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:44:44.758923    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:44:44.758934    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:44:44.797839    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:44:44.797854    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:44:44.809672    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:44:44.809684    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:44:44.823445    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:44:44.823457    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:44:44.837998    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:44:44.838007    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:44:44.854281    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:44:44.854293    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:44:44.881153    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:44:44.881159    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:44:44.898428    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:44:44.898440    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:44:44.902829    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:44:44.902838    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:44:44.919283    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:44:44.919294    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:44:44.936508    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:44:44.936519    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:44:47.449729    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:44:52.452495    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:44:52.453025    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:44:52.487014    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:44:52.487182    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:44:52.506440    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:44:52.506555    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:44:52.520504    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:44:52.520595    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:44:52.532648    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:44:52.532731    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:44:52.543264    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:44:52.543334    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:44:52.558121    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:44:52.558194    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:44:52.572641    4019 logs.go:276] 0 containers: []
	W0916 10:44:52.572652    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:44:52.572713    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:44:52.585354    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:44:52.585373    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:44:52.585379    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:44:52.626256    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:44:52.626266    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:44:52.640410    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:44:52.640420    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:44:52.653149    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:44:52.653157    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:44:52.667683    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:44:52.667700    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:44:52.680245    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:44:52.680258    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:44:52.684670    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:44:52.684679    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:44:52.695634    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:44:52.695648    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:44:52.709523    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:44:52.709533    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:44:52.726519    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:44:52.726530    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:44:52.762813    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:44:52.762823    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:44:52.774064    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:44:52.774078    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:44:52.785382    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:44:52.785396    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:44:52.801727    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:44:52.801739    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:44:52.813044    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:44:52.813056    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:44:52.826656    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:44:52.826666    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:44:52.839693    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:44:52.839704    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:44:55.366849    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:45:00.369633    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:45:00.370250    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:45:00.408458    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:45:00.408623    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:45:00.430370    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:45:00.430494    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:45:00.445490    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:45:00.445582    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:45:00.458733    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:45:00.458818    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:45:00.469592    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:45:00.469667    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:45:00.480111    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:45:00.480189    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:45:00.490608    4019 logs.go:276] 0 containers: []
	W0916 10:45:00.490619    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:45:00.490690    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:45:00.500442    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:45:00.500461    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:45:00.500467    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:45:00.539926    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:45:00.539938    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:45:00.553455    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:45:00.553467    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:45:00.564693    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:45:00.564706    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:45:00.576238    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:45:00.576255    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:45:00.597522    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:45:00.597535    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:45:00.608693    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:45:00.608705    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:45:00.620290    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:45:00.620307    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:45:00.637292    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:45:00.637302    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:45:00.650717    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:45:00.650726    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:45:00.661644    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:45:00.661656    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:45:00.672748    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:45:00.672756    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:45:00.684253    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:45:00.684263    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:45:00.709197    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:45:00.709204    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:45:00.720511    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:45:00.720521    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:45:00.724896    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:45:00.724903    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:45:00.764341    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:45:00.764355    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:45:03.283764    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:45:08.286431    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:45:08.287036    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:45:08.328721    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:45:08.328868    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:45:08.349125    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:45:08.349244    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:45:08.364698    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:45:08.364802    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:45:08.380789    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:45:08.380874    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:45:08.391216    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:45:08.391294    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:45:08.401782    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:45:08.401856    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:45:08.411847    4019 logs.go:276] 0 containers: []
	W0916 10:45:08.411860    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:45:08.411926    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:45:08.422203    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:45:08.422220    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:45:08.422226    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:45:08.433661    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:45:08.433675    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:45:08.445367    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:45:08.445380    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:45:08.462773    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:45:08.462786    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:45:08.467326    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:45:08.467334    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:45:08.502620    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:45:08.502636    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:45:08.513828    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:45:08.513838    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:45:08.525462    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:45:08.525473    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:45:08.536522    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:45:08.536536    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:45:08.547773    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:45:08.547783    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:45:08.561725    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:45:08.561735    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:45:08.575313    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:45:08.575321    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:45:08.587556    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:45:08.587574    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:45:08.600168    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:45:08.600176    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:45:08.611314    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:45:08.611324    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:45:08.636399    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:45:08.636408    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:45:08.675024    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:45:08.675034    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:45:11.188880    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:45:16.191365    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:45:16.191540    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:45:16.210273    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:45:16.210344    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:45:16.222879    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:45:16.222945    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:45:16.234029    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:45:16.234113    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:45:16.243952    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:45:16.244033    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:45:16.254086    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:45:16.254152    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:45:16.265499    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:45:16.265566    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:45:16.282204    4019 logs.go:276] 0 containers: []
	W0916 10:45:16.282216    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:45:16.282281    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:45:16.298157    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:45:16.298175    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:45:16.298182    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:45:16.312019    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:45:16.312030    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:45:16.323001    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:45:16.323012    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:45:16.334566    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:45:16.334577    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:45:16.351715    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:45:16.351725    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:45:16.362881    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:45:16.362896    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:45:16.380207    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:45:16.380217    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:45:16.418990    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:45:16.418997    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:45:16.435837    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:45:16.435846    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:45:16.446951    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:45:16.446962    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:45:16.458480    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:45:16.458489    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:45:16.469408    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:45:16.469422    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:45:16.473689    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:45:16.473694    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:45:16.484587    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:45:16.484600    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:45:16.519392    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:45:16.519403    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:45:16.530866    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:45:16.530878    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:45:16.542193    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:45:16.542203    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:45:19.066709    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:45:24.063698    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:45:24.064237    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:45:24.106089    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:45:24.106252    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:45:24.128083    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:45:24.128225    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:45:24.143732    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:45:24.143827    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:45:24.156141    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:45:24.156240    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:45:24.168173    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:45:24.168255    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:45:24.178847    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:45:24.178927    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:45:24.189133    4019 logs.go:276] 0 containers: []
	W0916 10:45:24.189145    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:45:24.189239    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:45:24.199907    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:45:24.199924    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:45:24.199929    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:45:24.211089    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:45:24.211099    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:45:24.221919    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:45:24.221928    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:45:24.233528    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:45:24.233537    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:45:24.237822    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:45:24.237829    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:45:24.271967    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:45:24.271977    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:45:24.283150    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:45:24.283161    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:45:24.294923    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:45:24.294932    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:45:24.312348    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:45:24.312363    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:45:24.324863    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:45:24.324874    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:45:24.338944    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:45:24.338954    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:45:24.350017    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:45:24.350028    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:45:24.371109    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:45:24.371119    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:45:24.383524    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:45:24.383538    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:45:24.397950    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:45:24.397962    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:45:24.411013    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:45:24.411024    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:45:24.450879    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:45:24.450886    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:45:26.976458    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:45:31.975432    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:45:31.976103    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:45:32.016707    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:45:32.016885    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:45:32.038999    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:45:32.039134    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:45:32.054048    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:45:32.054141    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:45:32.066603    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:45:32.066684    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:45:32.082853    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:45:32.082945    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:45:32.093816    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:45:32.093897    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:45:32.104331    4019 logs.go:276] 0 containers: []
	W0916 10:45:32.104346    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:45:32.104420    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:45:32.115334    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:45:32.115352    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:45:32.115358    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:45:32.126513    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:45:32.126526    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:45:32.139657    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:45:32.139671    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:45:32.157137    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:45:32.157149    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:45:32.169630    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:45:32.169641    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:45:32.208953    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:45:32.208967    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:45:32.223801    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:45:32.223811    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:45:32.238451    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:45:32.238463    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:45:32.250646    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:45:32.250657    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:45:32.275546    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:45:32.275555    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:45:32.315134    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:45:32.315142    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:45:32.326480    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:45:32.326491    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:45:32.338186    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:45:32.338197    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:45:32.350186    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:45:32.350197    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:45:32.354921    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:45:32.354928    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:45:32.366923    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:45:32.366936    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:45:32.379638    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:45:32.379654    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:45:34.891940    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:45:39.892282    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:45:39.892884    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:45:39.932203    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:45:39.932386    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:45:39.954455    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:45:39.954588    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:45:39.973980    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:45:39.974077    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:45:39.985894    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:45:39.985984    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:45:39.995953    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:45:39.996040    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:45:40.006750    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:45:40.006839    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:45:40.016918    4019 logs.go:276] 0 containers: []
	W0916 10:45:40.016930    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:45:40.017006    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:45:40.027473    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:45:40.027490    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:45:40.027497    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:45:40.061693    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:45:40.061706    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:45:40.086980    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:45:40.086987    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:45:40.098339    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:45:40.098350    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:45:40.109485    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:45:40.109496    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:45:40.120869    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:45:40.120883    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:45:40.132000    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:45:40.132011    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:45:40.143444    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:45:40.143456    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:45:40.157871    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:45:40.157885    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:45:40.176092    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:45:40.176101    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:45:40.192167    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:45:40.192177    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:45:40.203681    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:45:40.203692    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:45:40.215438    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:45:40.215451    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:45:40.228639    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:45:40.228650    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:45:40.248102    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:45:40.248116    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:45:40.288379    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:45:40.288387    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:45:40.292373    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:45:40.292379    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:45:42.807350    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:45:47.807093    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
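At this point the pattern has repeated without change for well over a minute of log time: a ~5 s healthz probe that times out, then a ~1-2 s diagnostic sweep, then the next probe roughly every 8 s, with nothing ever heard from the apiserver. The outer structure is a poll-until-deadline loop; the schematic version below uses hypothetical checkHealthz and gatherLogs helpers in the spirit of the sketches above, with the pause and deadline values chosen purely for illustration.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Stubs standing in for the sketches above; the names are hypothetical.
    func checkHealthz(url string) error {
        return errors.New("Client.Timeout exceeded while awaiting headers")
    }
    func gatherLogs() { /* docker ps, docker logs, journalctl, dmesg, describe nodes */ }

    // waitForAPIServer polls healthz until it answers or the deadline passes,
    // taking one diagnostic snapshot after every failed probe.
    func waitForAPIServer(url string, deadline time.Time) error {
        for time.Now().Before(deadline) {
            if err := checkHealthz(url); err == nil {
                return nil // apiserver became healthy
            }
            gatherLogs()                // one "Gathering logs for ..." sweep
            time.Sleep(2 * time.Second) // illustrative pause; the log's spacing is ~8 s
        }
        return fmt.Errorf("apiserver never reported healthy at %s", url)
    }

    func main() {
        err := waitForAPIServer("https://10.0.2.15:8443/healthz",
            time.Now().Add(10*time.Second)) // short deadline for the demo
        fmt.Println(err)
    }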
	I0916 10:45:47.807222    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:45:47.819541    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:45:47.819637    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:45:47.833325    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:45:47.833410    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:45:47.844647    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:45:47.844733    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:45:47.858734    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:45:47.858821    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:45:47.871200    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:45:47.871288    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:45:47.883048    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:45:47.883132    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:45:47.894103    4019 logs.go:276] 0 containers: []
	W0916 10:45:47.894116    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:45:47.894187    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:45:47.905381    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
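
	The enumeration pass above issues one docker ps -a per control-plane component, filtering on the k8s_<component> name prefix and printing bare container IDs, which is why kindnet (never deployed in this run) repeatedly yields "0 containers". A hedged sketch of that discovery loop, run locally via os/exec for self-containment (minikube executes the identical command inside the VM over SSH via ssh_runner):

	    // listContainers returns the IDs of all containers, running or exited,
	    // whose name matches the k8s_<component> prefix, mirroring the
	    // "docker ps -a --filter=name=... --format={{.ID}}" calls in the log.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func listContainers(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        // One ID per line; Fields also drops the trailing newline.
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        // The same eight components the log enumerates on every pass.
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "storage-provisioner",
	        }
	        for _, c := range components {
	            ids, err := listContainers(c)
	            if err != nil {
	                fmt.Println(c, "error:", err)
	                continue
	            }
	            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	        }
	    }
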
	I0916 10:45:47.905399    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:45:47.905405    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:45:47.944716    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:45:47.944732    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:45:47.960773    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:45:47.960786    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:45:47.979682    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:45:47.979695    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:45:47.992970    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:45:47.992983    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:45:47.998303    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:45:47.998318    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:45:48.020173    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:45:48.020186    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:45:48.037423    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:45:48.037437    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:45:48.054540    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:45:48.054552    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:45:48.082325    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:45:48.082341    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:45:48.097390    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:45:48.097403    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:45:48.115805    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:45:48.115822    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:45:48.128762    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:45:48.128774    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:45:48.141266    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:45:48.141281    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:45:48.182493    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:45:48.182512    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:45:48.194852    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:45:48.194865    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:45:48.211171    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:45:48.211184    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
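
	Every gathering pass cycles through the same fixed set of log sources, each backed by one shell command that appears verbatim in the lines above; only the visiting order changes between passes. The sketch below collects them into an illustrative table (the map and its keys are a readability assumption, not minikube's internal layout; <id> stands for a container ID found during enumeration, and v1.24.1 is the kubectl version used in this run):

	    // Illustrative only: the per-source shell commands seen in the log,
	    // keyed by the label logs.go prints for each source.
	    package main

	    import "fmt"

	    var gatherCommands = map[string]string{
	        "container logs":   "docker logs --tail 400 <id>",
	        "kubelet":          "sudo journalctl -u kubelet -n 400",
	        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
	        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	        "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	    }

	    func main() {
	        for source, cmd := range gatherCommands {
	            fmt.Printf("%-17s %s\n", source, cmd)
	        }
	    }
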
	I0916 10:45:50.725854    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:45:55.727305    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:45:55.727462    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:45:55.741181    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:45:55.741268    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:45:55.752911    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:45:55.752997    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:45:55.763112    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:45:55.763188    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:45:55.773621    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:45:55.773703    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:45:55.784518    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:45:55.784603    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:45:55.794644    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:45:55.794727    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:45:55.804955    4019 logs.go:276] 0 containers: []
	W0916 10:45:55.804966    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:45:55.805040    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:45:55.815508    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:45:55.815529    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:45:55.815535    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:45:55.849157    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:45:55.849170    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:45:55.860535    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:45:55.860547    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:45:55.876330    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:45:55.876346    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:45:55.888292    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:45:55.888304    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:45:55.928500    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:45:55.928520    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:45:55.940779    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:45:55.940793    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:45:55.959013    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:45:55.959028    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:45:55.977077    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:45:55.977089    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:45:55.994788    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:45:55.994801    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:45:56.007117    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:45:56.007130    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:45:56.011501    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:45:56.011510    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:45:56.023510    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:45:56.023521    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:45:56.034785    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:45:56.034797    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:45:56.048155    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:45:56.048165    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:45:56.072238    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:45:56.072248    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:45:56.084331    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:45:56.084346    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:45:58.605064    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:03.606695    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:03.606879    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:03.619767    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:03.619851    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:03.632043    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:03.632122    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:03.642763    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:03.642842    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:03.653622    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:03.653715    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:03.664672    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:03.664758    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:03.680405    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:03.680490    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:03.691059    4019 logs.go:276] 0 containers: []
	W0916 10:46:03.691070    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:03.691143    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:03.702190    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:03.702206    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:03.702213    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:03.740632    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:03.740646    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:03.754487    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:03.754503    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:03.766631    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:03.766648    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:03.778944    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:03.778959    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:03.820726    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:03.820747    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:03.837615    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:03.837629    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:03.857722    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:03.857738    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:03.869890    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:03.869907    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:03.882899    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:03.882914    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:03.895855    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:03.895873    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:03.915585    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:03.915605    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:03.942020    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:03.942031    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:03.946263    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:03.946274    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:03.959421    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:03.959435    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:03.971514    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:03.971528    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:03.988557    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:03.988569    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:06.502613    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:11.504528    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:11.504652    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:11.515812    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:11.515889    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:11.530368    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:11.530458    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:11.542631    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:11.542710    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:11.552887    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:11.552963    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:11.563984    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:11.564067    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:11.578332    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:11.578405    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:11.589303    4019 logs.go:276] 0 containers: []
	W0916 10:46:11.589319    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:11.589395    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:11.600022    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:11.600042    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:11.600048    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:11.611607    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:11.611621    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:11.623630    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:11.623643    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:11.635107    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:11.635117    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:11.673942    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:11.673952    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:11.678032    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:11.678039    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:11.714453    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:11.714464    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:11.731784    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:11.731796    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:11.757113    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:11.757122    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:11.770460    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:11.770470    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:11.781328    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:11.781342    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:11.793091    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:11.793102    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:11.807449    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:11.807459    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:11.818669    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:11.818683    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:11.829485    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:11.829497    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:11.840888    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:11.840898    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:11.851990    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:11.852005    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:14.365280    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:19.367616    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:19.368226    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:19.407304    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:19.407478    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:19.435988    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:19.436100    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:19.451383    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:19.451478    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:19.463210    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:19.463300    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:19.476437    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:19.476523    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:19.492947    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:19.493028    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:19.503174    4019 logs.go:276] 0 containers: []
	W0916 10:46:19.503189    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:19.503252    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:19.514136    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:19.514153    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:19.514159    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:19.550533    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:19.550546    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:19.565254    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:19.565265    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:19.578969    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:19.578984    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:19.590304    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:19.590316    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:19.601546    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:19.601560    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:19.612693    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:19.612702    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:19.623848    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:19.623860    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:19.628048    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:19.628057    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:19.639122    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:19.639132    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:19.650075    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:19.650087    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:19.688926    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:19.688934    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:19.709872    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:19.709886    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:19.725340    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:19.725352    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:19.737330    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:19.737343    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:19.748845    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:19.748855    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:19.773273    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:19.773284    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:22.292253    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:27.294677    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:27.294944    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:27.308938    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:27.309027    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:27.320029    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:27.320108    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:27.330825    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:27.330907    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:27.341308    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:27.341392    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:27.351866    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:27.351935    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:27.362258    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:27.362328    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:27.373127    4019 logs.go:276] 0 containers: []
	W0916 10:46:27.373140    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:27.373218    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:27.383791    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:27.383807    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:27.383814    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:27.395056    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:27.395067    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:27.407251    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:27.407262    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:27.418788    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:27.418800    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:27.429951    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:27.429963    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:27.456312    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:27.456320    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:27.498819    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:27.498831    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:27.503449    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:27.503459    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:27.515009    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:27.515027    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:27.527009    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:27.527020    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:27.544339    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:27.544349    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:27.561700    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:27.561712    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:27.597994    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:27.598004    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:27.616208    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:27.616219    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:27.627865    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:27.627878    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:27.643123    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:27.643133    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:27.660751    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:27.660760    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:30.174225    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:35.176508    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:35.177189    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:35.218387    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:35.218566    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:35.239866    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:35.239992    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:35.254357    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:35.254437    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:35.266202    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:35.266284    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:35.276921    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:35.277008    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:35.287099    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:35.287171    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:35.297103    4019 logs.go:276] 0 containers: []
	W0916 10:46:35.297116    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:35.297177    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:35.307307    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:35.307324    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:35.307330    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:35.321115    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:35.321127    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:35.332349    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:35.332359    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:35.343270    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:35.343282    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:35.368359    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:35.368368    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:35.373002    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:35.373012    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:35.384229    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:35.384240    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:35.395946    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:35.395956    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:35.412834    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:35.412843    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:35.425720    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:35.425738    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:35.465966    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:35.465974    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:35.499609    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:35.499621    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:35.510720    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:35.510732    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:35.529178    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:35.529190    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:35.551234    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:35.551248    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:35.567155    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:35.567165    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:35.578873    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:35.578883    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:38.090060    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:43.092041    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:43.092211    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:43.104409    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:43.104496    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:43.115076    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:43.115164    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:43.125250    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:43.125338    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:43.135667    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:43.135751    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:43.146279    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:43.146359    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:43.156663    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:43.156742    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:43.167106    4019 logs.go:276] 0 containers: []
	W0916 10:46:43.167117    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:43.167188    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:43.178247    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:43.178264    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:43.178270    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:43.189735    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:43.189747    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:43.227400    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:43.227411    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:43.241486    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:43.241496    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:43.252854    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:43.252869    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:43.266299    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:43.266310    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:43.278139    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:43.278154    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:43.295444    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:43.295454    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:43.306560    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:43.306574    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:43.318187    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:43.318202    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:43.329250    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:43.329263    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:43.369076    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:43.369083    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:43.373327    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:43.373337    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:43.385306    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:43.385316    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:43.396902    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:43.396915    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:43.420047    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:43.420054    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:43.432042    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:43.432054    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:45.945780    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:50.947809    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:50.948169    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:50.975854    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:50.976004    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:50.993417    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:50.993515    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:51.006498    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:51.006592    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:51.018442    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:51.018532    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:51.029361    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:51.029446    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:51.039966    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:51.040040    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:51.050678    4019 logs.go:276] 0 containers: []
	W0916 10:46:51.050693    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:51.050777    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:51.061114    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:51.061132    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:51.061139    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:51.098350    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:51.098364    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:51.111376    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:51.111389    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:51.123682    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:51.123697    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:51.135776    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:51.135789    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:51.153534    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:51.153552    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:51.164676    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:51.164689    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:51.175846    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:51.175860    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:51.214640    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:51.214649    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:51.227960    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:51.227973    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:51.253792    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:51.253814    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:51.268332    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:51.268350    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:51.280622    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:51.280635    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:51.291960    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:51.291970    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:51.306398    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:51.306414    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:51.317876    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:51.317888    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:51.329793    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:51.329808    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:53.836580    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:58.838653    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:58.838765    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:58.849743    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:58.849835    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:58.860646    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:58.860741    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:58.871432    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:58.871509    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:58.882592    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:58.882679    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:58.893613    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:58.893699    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:58.908133    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:58.908213    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:58.917829    4019 logs.go:276] 0 containers: []
	W0916 10:46:58.917839    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:58.917912    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:58.928259    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:58.928276    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:58.928282    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:58.942535    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:58.942546    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:58.966776    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:58.966789    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:58.985655    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:58.985666    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:58.997257    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:58.997278    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:59.014782    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:59.014796    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:59.027732    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:59.027747    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:59.049962    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:59.049969    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:59.061140    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:59.061150    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:59.072497    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:59.072508    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:59.083657    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:59.083669    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:59.094958    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:59.094970    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:59.106103    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:59.106114    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:59.118002    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:59.118013    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:59.157866    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:59.157874    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:59.162419    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:59.162427    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:59.196843    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:59.196853    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:47:01.709134    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:06.711191    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:06.711329    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:47:06.723988    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:47:06.724088    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:47:06.735055    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:47:06.735151    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:47:06.746039    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:47:06.746126    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:47:06.756737    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:47:06.756839    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:47:06.775992    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:47:06.776076    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:47:06.787077    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:47:06.787159    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:47:06.813202    4019 logs.go:276] 0 containers: []
	W0916 10:47:06.813218    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:47:06.813298    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:47:06.833101    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:47:06.833125    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:47:06.833133    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:47:06.855007    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:47:06.855018    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:47:06.866476    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:47:06.866491    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:47:06.880699    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:47:06.880710    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:47:06.892292    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:47:06.892301    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:47:06.904946    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:47:06.904960    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:47:06.916457    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:47:06.916468    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:47:06.939637    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:47:06.939648    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:47:06.951394    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:47:06.951406    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:47:06.989090    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:47:06.989106    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:47:07.007844    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:47:07.007853    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:47:07.025310    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:47:07.025322    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:47:07.037907    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:47:07.037920    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:47:07.049188    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:47:07.049199    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:47:07.060054    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:47:07.060068    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:47:07.098754    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:47:07.098764    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:47:07.103459    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:47:07.103468    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
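Once the IDs are known, each "Gathering logs" pass is just tailing the individual containers plus the relevant journald units and the node description, exactly as the Run: lines show. The three recurring log sources, using one container ID from above:

    docker logs --tail 400 97b9ab80816a                 # one container's tail
    sudo journalctl -u kubelet -n 400                   # kubelet unit
    sudo journalctl -u docker -u cri-docker -n 400      # container runtime units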
	I0916 10:47:09.617375    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:14.619889    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:14.620136    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:47:14.643917    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:47:14.644042    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:47:14.659315    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:47:14.659393    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:47:14.671532    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:47:14.671602    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:47:14.683072    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:47:14.683156    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:47:14.693389    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:47:14.693465    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:47:14.704231    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:47:14.704302    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:47:14.717026    4019 logs.go:276] 0 containers: []
	W0916 10:47:14.717039    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:47:14.717104    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:47:14.727992    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:47:14.728010    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:47:14.728015    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:47:14.742137    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:47:14.742147    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:47:14.754529    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:47:14.754541    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:47:14.766235    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:47:14.766248    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:47:14.777861    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:47:14.777873    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:47:14.782266    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:47:14.782277    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:47:14.793476    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:47:14.793489    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:47:14.805596    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:47:14.805605    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:47:14.816868    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:47:14.816881    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:47:14.830200    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:47:14.830210    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:47:14.842190    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:47:14.842200    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:47:14.880024    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:47:14.880030    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:47:14.915532    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:47:14.915544    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:47:14.929743    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:47:14.929752    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:47:14.941025    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:47:14.941037    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:47:14.964835    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:47:14.964841    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:47:14.983623    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:47:14.983632    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:47:17.502860    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:22.505522    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:22.505615    4019 kubeadm.go:597] duration metric: took 4m4.83813525s to restartPrimaryControlPlane
	W0916 10:47:22.505689    4019 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 10:47:22.505732    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0916 10:47:23.411457    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:47:23.416432    4019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:47:23.419173    4019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:47:23.422085    4019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:47:23.422091    4019 kubeadm.go:157] found existing configuration files:
	
	I0916 10:47:23.422124    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf
	I0916 10:47:23.424750    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:47:23.424785    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:47:23.427834    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf
	I0916 10:47:23.430608    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:47:23.430639    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:47:23.433298    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf
	I0916 10:47:23.436472    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:47:23.436497    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:47:23.439830    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf
	I0916 10:47:23.442644    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:47:23.442669    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
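The grep/rm pairs above implement the stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already names the expected control-plane endpoint, and is otherwise deleted so kubeadm can regenerate it. Condensed (the endpoint is copied from the log; the loop form is illustrative):

    ep="https://control-plane.minikube.internal:50291"
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$ep" "/etc/kubernetes/$f.conf" 2>/dev/null \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done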
	I0916 10:47:23.445060    4019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:47:23.462156    4019 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0916 10:47:23.462190    4019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:47:23.513446    4019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:47:23.513527    4019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:47:23.513586    4019 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:47:23.566716    4019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:47:23.574876    4019 out.go:235]   - Generating certificates and keys ...
	I0916 10:47:23.574910    4019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:47:23.574940    4019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:47:23.574990    4019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 10:47:23.575020    4019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 10:47:23.575059    4019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 10:47:23.575088    4019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 10:47:23.575128    4019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 10:47:23.575161    4019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 10:47:23.575216    4019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 10:47:23.575250    4019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 10:47:23.575269    4019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 10:47:23.575299    4019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:47:23.626726    4019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:47:23.704362    4019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:47:23.891584    4019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:47:23.966274    4019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:47:23.994170    4019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:47:23.994598    4019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:47:23.994619    4019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:47:24.081817    4019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:47:24.088932    4019 out.go:235]   - Booting up control plane ...
	I0916 10:47:24.088985    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:47:24.089021    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:47:24.089088    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:47:24.089126    4019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:47:24.089206    4019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 10:47:28.589093    4019 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504560 seconds
	I0916 10:47:28.589185    4019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:47:28.596500    4019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:47:29.107598    4019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:47:29.107768    4019 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-707000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:47:29.611473    4019 kubeadm.go:310] [bootstrap-token] Using token: me8yh3.v7xmqm9syeoc3hay
	I0916 10:47:29.617847    4019 out.go:235]   - Configuring RBAC rules ...
	I0916 10:47:29.617907    4019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:47:29.617961    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:47:29.622495    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:47:29.623410    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:47:29.624266    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:47:29.625114    4019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:47:29.628226    4019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:47:29.804831    4019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:47:30.015465    4019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:47:30.016122    4019 kubeadm.go:310] 
	I0916 10:47:30.016159    4019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:47:30.016162    4019 kubeadm.go:310] 
	I0916 10:47:30.016239    4019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:47:30.016247    4019 kubeadm.go:310] 
	I0916 10:47:30.016262    4019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:47:30.016290    4019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:47:30.016314    4019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:47:30.016317    4019 kubeadm.go:310] 
	I0916 10:47:30.016343    4019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:47:30.016347    4019 kubeadm.go:310] 
	I0916 10:47:30.016384    4019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:47:30.016388    4019 kubeadm.go:310] 
	I0916 10:47:30.016442    4019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:47:30.016508    4019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:47:30.016570    4019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:47:30.016577    4019 kubeadm.go:310] 
	I0916 10:47:30.016652    4019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:47:30.016694    4019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:47:30.016712    4019 kubeadm.go:310] 
	I0916 10:47:30.016752    4019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token me8yh3.v7xmqm9syeoc3hay \
	I0916 10:47:30.016801    4019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda \
	I0916 10:47:30.016834    4019 kubeadm.go:310] 	--control-plane 
	I0916 10:47:30.016840    4019 kubeadm.go:310] 
	I0916 10:47:30.016887    4019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:47:30.016891    4019 kubeadm.go:310] 
	I0916 10:47:30.016930    4019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token me8yh3.v7xmqm9syeoc3hay \
	I0916 10:47:30.017011    4019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda 
	I0916 10:47:30.017069    4019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:47:30.017077    4019 cni.go:84] Creating CNI manager for ""
	I0916 10:47:30.017085    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:47:30.020695    4019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:47:30.027706    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:47:30.032351    4019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
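The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. A typical bridge conflist of the kind minikube writes looks roughly like this — the field values and subnet are illustrative, not the contents of the actual file:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF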
	I0916 10:47:30.037238    4019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:47:30.037291    4019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:47:30.037309    4019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-707000 minikube.k8s.io/updated_at=2024_09_16T10_47_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=running-upgrade-707000 minikube.k8s.io/primary=true
	I0916 10:47:30.075582    4019 ops.go:34] apiserver oom_adj: -16
	I0916 10:47:30.075592    4019 kubeadm.go:1113] duration metric: took 38.3485ms to wait for elevateKubeSystemPrivileges
	I0916 10:47:30.075601    4019 kubeadm.go:394] duration metric: took 4m12.438076083s to StartCluster
	I0916 10:47:30.075612    4019 settings.go:142] acquiring lock: {Name:mkcc144e0c413dd8611ee3ccbc8c08f02650f2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:30.075701    4019 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:47:30.076141    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/kubeconfig: {Name:mk3766c19461825f7de68cf1dc4ddceadf57e288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:30.076346    4019 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:47:30.076392    4019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:47:30.076429    4019 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-707000"
	I0916 10:47:30.076436    4019 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-707000"
	W0916 10:47:30.076440    4019 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:47:30.076441    4019 config.go:182] Loaded profile config "running-upgrade-707000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:47:30.076449    4019 host.go:66] Checking if "running-upgrade-707000" exists ...
	I0916 10:47:30.076475    4019 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-707000"
	I0916 10:47:30.076484    4019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-707000"
	I0916 10:47:30.076709    4019 retry.go:31] will retry after 855.309173ms: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/monitor: connect: connection refused
	I0916 10:47:30.077432    4019 kapi.go:59] client config for running-upgrade-707000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/client.key", CAFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10285d800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:47:30.077549    4019 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-707000"
	W0916 10:47:30.077554    4019 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:47:30.077561    4019 host.go:66] Checking if "running-upgrade-707000" exists ...
	I0916 10:47:30.078081    4019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:47:30.078087    4019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:47:30.078092    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/id_rsa Username:docker}
	I0916 10:47:30.079763    4019 out.go:177] * Verifying Kubernetes components...
	I0916 10:47:30.087756    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:47:30.183782    4019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:47:30.188902    4019 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:47:30.188953    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:47:30.193241    4019 api_server.go:72] duration metric: took 116.88875ms to wait for apiserver process to appear ...
	I0916 10:47:30.193248    4019 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:47:30.193254    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:30.248882    4019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:47:30.534839    4019 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:47:30.534851    4019 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:47:30.939712    4019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:47:30.943701    4019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:47:30.943707    4019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:47:30.943716    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/id_rsa Username:docker}
	I0916 10:47:30.983321    4019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
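Both addon enables follow the same two-step pattern visible above: copy the manifest into the VM, then apply it with the in-VM kubectl against the node-local kubeconfig (paths as logged):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.24.1/kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml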
	I0916 10:47:35.194385    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:35.194423    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:40.195026    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:40.195063    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:45.195580    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:45.195601    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:50.195828    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:50.195873    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:55.196275    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:55.196315    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:00.196558    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:00.196585    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0916 10:48:00.535723    4019 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0916 10:48:00.539895    4019 out.go:177] * Enabled addons: storage-provisioner
	I0916 10:48:00.550885    4019 addons.go:510] duration metric: took 30.475404333s for enable addons: enabled=[storage-provisioner]
	I0916 10:48:05.197204    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:05.197267    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:10.198215    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:10.198246    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:15.199384    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:15.199406    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:20.200628    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:20.200651    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:25.202033    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:25.202109    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:30.204367    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:30.204482    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:30.222153    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:48:30.222246    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:30.238815    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:48:30.238898    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:30.250032    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:48:30.250118    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:30.263547    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:48:30.263633    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:30.274353    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:48:30.274440    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:30.285721    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:48:30.285803    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:30.301592    4019 logs.go:276] 0 containers: []
	W0916 10:48:30.301603    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:30.301669    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:30.315076    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:48:30.315090    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:30.315098    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:30.350969    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:48:30.350977    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:48:30.364913    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:48:30.364923    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:48:30.378848    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:48:30.378859    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:48:30.390889    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:48:30.390900    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:48:30.408604    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:48:30.408614    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:48:30.424503    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:30.424513    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:30.448225    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:48:30.448233    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:30.460547    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:30.460558    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:30.465379    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:30.465387    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:30.503393    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:48:30.503405    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:48:30.515338    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:48:30.515349    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:48:30.534217    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:48:30.534231    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:48:33.048301    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:38.050937    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:38.051470    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:38.081171    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:48:38.081330    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:38.098828    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:48:38.098934    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:38.112555    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:48:38.112641    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:38.124094    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:48:38.124199    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:38.134709    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:48:38.134790    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:38.144850    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:48:38.144937    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:38.154961    4019 logs.go:276] 0 containers: []
	W0916 10:48:38.154972    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:38.155045    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:38.165159    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:48:38.165172    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:48:38.165178    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:48:38.182789    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:48:38.182805    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:38.194743    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:38.194753    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:38.199003    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:48:38.199009    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:48:38.212950    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:48:38.212961    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:48:38.224121    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:48:38.224132    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:48:38.239711    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:48:38.239722    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:48:38.251468    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:38.251478    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:38.275131    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:38.275142    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:38.312421    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:38.312440    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:38.351072    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:48:38.351084    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:48:38.366134    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:48:38.366148    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:48:38.378614    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:48:38.378628    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:48:40.892225    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:45.894419    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:45.894724    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:45.920281    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:48:45.920427    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:45.937279    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:48:45.937370    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:45.950650    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:48:45.950743    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:45.961520    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:48:45.961600    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:45.971850    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:48:45.971931    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:45.982046    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:48:45.982128    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:45.991796    4019 logs.go:276] 0 containers: []
	W0916 10:48:45.991807    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:45.991883    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:46.002589    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:48:46.002602    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:46.002608    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:46.037776    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:46.037783    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:46.075420    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:48:46.075434    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:48:46.089270    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:48:46.089283    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:48:46.101545    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:48:46.101560    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:48:46.121700    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:48:46.121712    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:48:46.139169    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:46.139182    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:46.163186    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:46.163196    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:46.170248    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:48:46.170254    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:48:46.184697    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:48:46.184710    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:48:46.196740    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:48:46.196751    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:48:46.208039    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:48:46.208049    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:48:46.222706    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:48:46.222717    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:48.734776    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:53.737485    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:53.737830    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:53.763783    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:48:53.763892    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:53.779713    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:48:53.779851    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:53.793342    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:48:53.793436    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:53.808103    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:48:53.808186    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:53.819065    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:48:53.819153    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:53.830759    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:48:53.830840    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:53.845951    4019 logs.go:276] 0 containers: []
	W0916 10:48:53.845963    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:53.846034    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:53.858811    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:48:53.858827    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:48:53.858835    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:48:53.870859    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:53.870872    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:53.908638    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:48:53.908654    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:48:53.924340    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:48:53.924351    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:48:53.938173    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:48:53.938184    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:48:53.951205    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:48:53.951215    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:48:53.962481    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:48:53.962490    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:48:53.981083    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:53.981096    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:54.005873    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:48:54.005883    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:54.017149    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:54.017159    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:54.053668    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:54.053677    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:54.057898    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:48:54.057907    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:48:54.069092    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:48:54.069103    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:48:56.585764    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:01.587447    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:01.587644    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:01.602554    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:01.602652    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:01.614393    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:01.614477    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:01.625137    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:01.625214    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:01.635658    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:01.635740    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:01.646530    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:01.646622    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:01.657048    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:01.657127    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:01.667131    4019 logs.go:276] 0 containers: []
	W0916 10:49:01.667142    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:01.667212    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:01.677592    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:01.677608    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:01.677614    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:01.682111    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:01.682118    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:01.696783    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:01.696792    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:01.711141    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:01.711151    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:01.735985    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:01.735992    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:01.753796    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:01.753811    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:01.791051    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:01.791061    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:01.832180    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:01.832196    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:01.847277    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:01.847286    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:01.862161    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:01.862170    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:01.873998    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:01.874011    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:01.885427    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:01.885437    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:01.897695    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:01.897710    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:04.411752    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:09.414298    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:09.414605    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:09.441191    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:09.441343    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:09.458316    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:09.458410    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:09.471538    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:09.471626    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:09.483980    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:09.484076    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:09.495908    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:09.496004    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:09.507155    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:09.507253    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:09.519311    4019 logs.go:276] 0 containers: []
	W0916 10:49:09.519325    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:09.519407    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:09.531341    4019 logs.go:276] 1 containers: [99cd5cffce2f]
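Each failed probe triggers the same container inventory seen above: one `docker ps -a` per control-plane component, filtered on the kubelet's `k8s_<component>` container-naming convention. A bash sketch of that enumeration (component list copied from the log; run inside the guest):

    # List container IDs for each expected component; an empty result
    # (as for kindnet above) means no matching container exists.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
      echo "${c}: ${ids:-none}"
    done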
	I0916 10:49:09.531357    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:09.531363    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:09.543774    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:09.543788    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:09.579983    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:09.579995    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:09.594639    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:09.594651    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:09.609011    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:09.609022    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:09.620456    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:09.620467    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:09.639029    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:09.639040    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:09.662628    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:09.662636    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:09.674420    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:09.674431    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:09.710672    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:09.710680    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:09.714894    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:09.714902    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:09.731548    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:09.731564    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:09.743936    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:09.743947    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
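With the inventory in hand, the cycle then tails the last 400 lines of every container it found, in varying order. A sketch of that per-container step, using the IDs reported by the enumeration above:

    # Tail recent logs for each discovered container ID.
    for id in 43c66cee0871 4d35cfd047f9 af22ba76198b c1a6f8529ee6 \
              e4004b0878ea 0da0b18bf25a 904c154b318d 99cd5cffce2f; do
      echo "=== ${id} ==="
      docker logs --tail 400 "${id}"
    done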
	I0916 10:49:12.264143    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:17.264652    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:17.264882    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:17.280199    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:17.280301    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:17.292933    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:17.293026    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:17.304127    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:17.304212    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:17.313972    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:17.314056    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:17.324045    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:17.324126    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:17.334826    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:17.334910    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:17.345833    4019 logs.go:276] 0 containers: []
	W0916 10:49:17.345849    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:17.345922    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:17.356345    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:17.356361    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:17.356367    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:17.370049    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:17.370059    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:17.383838    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:17.383852    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:17.398524    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:17.398534    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:17.410674    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:17.410688    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:17.422084    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:17.422095    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:17.446890    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:17.446898    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:17.451712    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:17.451721    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:17.486048    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:17.486059    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:17.497795    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:17.497806    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:17.508980    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:17.508992    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:17.536937    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:17.536949    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
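The "container status" command above is a deliberate two-level fallback: the `` `which crictl || echo crictl` `` substitution keeps the command string valid whether or not crictl is installed, and the trailing `|| sudo docker ps -a` catches the case where crictl is missing or exits non-zero. Unrolled into an equivalent sketch:

    # Same fallback behavior: attempt crictl; on any failure
    # (including a missing binary) fall back to plain docker.
    sudo crictl ps -a 2>/dev/null || sudo docker ps -a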
	I0916 10:49:17.549475    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:17.549488    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:20.088971    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:25.090758    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:25.090903    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:25.105225    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:25.105320    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:25.118066    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:25.118147    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:25.128994    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:25.129076    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:25.139502    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:25.139587    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:25.150445    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:25.150531    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:25.160902    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:25.160989    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:25.172708    4019 logs.go:276] 0 containers: []
	W0916 10:49:25.172720    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:25.172796    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:25.187869    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:25.187885    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:25.187891    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:25.205084    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:25.205096    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:25.222307    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:25.222317    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:25.234233    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:25.234244    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:25.239273    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:25.239283    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:25.253235    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:25.253247    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:25.268006    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:25.268017    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:25.280047    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:25.280059    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:25.291768    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:25.291779    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:25.303336    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:25.303349    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:25.326218    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:25.326224    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:25.337568    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:25.337583    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:25.373189    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:25.373198    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:27.915365    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:32.917497    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:32.917714    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:32.932452    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:32.932547    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:32.945141    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:32.945229    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:32.956089    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:32.956172    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:32.966381    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:32.966460    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:32.977971    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:32.978053    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:32.995925    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:32.996010    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:33.005938    4019 logs.go:276] 0 containers: []
	W0916 10:49:33.005949    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:33.006010    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:33.016215    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:33.016229    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:33.016236    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:33.027218    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:33.027228    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:33.038865    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:33.038878    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:33.056383    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:33.056395    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:33.091860    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:33.091868    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:33.109249    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:33.109259    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:33.123499    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:33.123509    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:33.137632    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:33.137645    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:33.149176    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:33.149187    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:33.174402    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:33.174411    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:33.185993    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:33.186004    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:33.190791    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:33.190797    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:33.226026    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:33.226037    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:35.739786    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:40.741886    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:40.742025    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:40.753578    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:40.753669    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:40.763995    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:40.764088    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:40.774520    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:40.774595    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:40.785284    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:40.785370    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:40.799018    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:40.799106    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:40.809714    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:40.809787    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:40.819984    4019 logs.go:276] 0 containers: []
	W0916 10:49:40.819996    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:40.820060    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:40.830726    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:40.830740    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:40.830745    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:40.848062    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:40.848074    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:40.884285    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:40.884299    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:40.889098    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:40.889106    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:40.903093    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:40.903103    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:40.919618    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:40.919630    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:40.931000    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:40.931010    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:40.942447    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:40.942456    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:40.954212    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:40.954225    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:40.978448    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:40.978458    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:41.013045    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:41.013055    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:41.032098    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:41.032108    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:41.043564    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:41.043577    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:43.557292    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:48.559411    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:48.559692    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:48.581357    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:48.581483    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:48.596936    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:48.597032    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:48.609453    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:49:48.609542    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:48.620239    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:48.620318    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:48.630439    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:48.630520    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:48.641284    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:48.641369    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:48.651669    4019 logs.go:276] 0 containers: []
	W0916 10:49:48.651680    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:48.651745    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:48.662189    4019 logs.go:276] 1 containers: [99cd5cffce2f]
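Note the change at 10:49:48: the coredns filter now matches four container IDs instead of the two seen in every earlier cycle, suggesting two replacement coredns containers were created while the old ones remain in the `docker ps -a` listing (which includes exited containers). Each subsequent cycle therefore tails all four; a loop over the IDs reported above:

    # Tail the last 400 lines of every coredns container, old and new.
    for id in feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6; do
      docker logs --tail 400 "${id}"
    done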
	I0916 10:49:48.662206    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:48.662212    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:48.675855    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:49:48.675865    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:49:48.687611    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:48.687622    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:48.704406    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:48.704420    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:48.731825    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:48.731838    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:48.737968    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:48.737980    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:48.775526    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:48.775537    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:48.790268    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:48.790277    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:48.807740    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:48.807754    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:48.821645    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:48.821657    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:48.833265    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:48.833281    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:48.844810    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:49:48.844823    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:49:48.856456    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:48.856466    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:48.867359    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:48.867373    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:48.902598    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:48.902609    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:51.416479    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:56.418740    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:56.419363    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:56.457852    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:56.458021    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:56.479648    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:56.479769    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:56.494411    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:49:56.494497    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:56.506450    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:56.506529    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:56.516933    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:56.517013    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:56.527666    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:56.527748    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:56.537957    4019 logs.go:276] 0 containers: []
	W0916 10:49:56.537968    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:56.538038    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:56.554779    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:56.554801    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:56.554808    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:56.566596    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:56.566607    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:56.578500    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:56.578511    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:56.595488    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:56.595500    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:56.619530    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:56.619538    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:56.631235    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:56.631245    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:56.645544    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:56.645554    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:56.649774    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:56.649782    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:56.685602    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:56.685617    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:56.700263    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:56.700273    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:56.714191    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:56.714207    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:56.749862    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:56.749871    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:56.774455    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:49:56.774467    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:49:56.796315    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:49:56.796328    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:49:56.812410    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:56.812422    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
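Beyond per-container logs, every cycle also pulls the host-level sources seen above: the kubelet and Docker/cri-docker systemd journals, a filtered dmesg, and a `kubectl describe nodes` run against the in-guest kubeconfig. Those four commands, copied from the log (the kubectl path is version-specific to this v1.24.1 cluster):

    # Host-level diagnostics matching the gather steps in the log.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig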
	I0916 10:49:59.327488    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:04.329982    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:04.330168    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:04.346596    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:04.346694    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:04.359364    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:04.359442    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:04.372021    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:04.372106    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:04.382686    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:04.382767    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:04.392928    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:04.393005    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:04.403805    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:04.403882    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:04.414120    4019 logs.go:276] 0 containers: []
	W0916 10:50:04.414130    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:04.414203    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:04.423946    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:04.423963    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:04.423968    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:04.438166    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:04.438180    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:04.449891    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:04.449906    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:04.461551    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:04.461564    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:04.479689    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:04.479703    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:04.491318    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:04.491332    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:04.526873    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:04.526881    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:04.562752    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:04.562768    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:04.574880    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:04.574890    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:04.599524    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:04.599531    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:04.604331    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:04.604340    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:04.615598    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:04.615610    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:04.640423    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:04.640436    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:04.654888    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:04.654899    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:04.670287    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:04.670298    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:07.184584    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:12.186320    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:12.186567    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:12.212657    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:12.212813    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:12.232050    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:12.232145    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:12.247444    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:12.247533    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:12.263016    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:12.263093    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:12.273624    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:12.273705    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:12.284186    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:12.284262    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:12.294161    4019 logs.go:276] 0 containers: []
	W0916 10:50:12.294176    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:12.294249    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:12.304639    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:12.304655    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:12.304661    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:12.343846    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:12.343856    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:12.358426    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:12.358436    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:12.394342    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:12.394355    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:12.408803    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:12.408815    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:12.420170    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:12.420184    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:12.431783    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:12.431796    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:12.443732    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:12.443746    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:12.448674    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:12.448682    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:12.460178    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:12.460191    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:12.477739    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:12.477752    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:12.489425    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:12.489441    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:12.505570    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:12.505581    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:12.519622    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:12.519633    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:12.531212    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:12.531223    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:15.059349    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:20.061769    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:20.061954    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:20.081968    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:20.082079    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:20.098734    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:20.098823    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:20.110914    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:20.110990    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:20.122289    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:20.122372    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:20.133038    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:20.133113    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:20.143237    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:20.143316    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:20.153607    4019 logs.go:276] 0 containers: []
	W0916 10:50:20.153618    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:20.153682    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:20.170077    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:20.170097    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:20.170104    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:20.182976    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:20.182990    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:20.198620    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:20.198631    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:20.216249    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:20.216260    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:20.228796    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:20.228810    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:20.240582    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:20.240595    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:20.262925    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:20.262936    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:20.274260    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:20.274271    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:20.285673    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:20.285683    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:20.299494    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:20.299504    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:20.336521    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:20.336528    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:20.340911    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:20.340918    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:20.375749    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:20.375762    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:20.390194    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:20.390204    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:20.401642    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:20.401652    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:22.927604    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:27.930107    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:27.930403    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:27.956285    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:27.956438    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:27.991505    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:27.991587    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:28.007932    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:28.008020    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:28.023878    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:28.023966    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:28.035280    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:28.035361    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:28.048854    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:28.048938    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:28.060723    4019 logs.go:276] 0 containers: []
	W0916 10:50:28.060736    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:28.060807    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:28.070898    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:28.070914    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:28.070920    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:28.106344    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:28.106355    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:28.121057    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:28.121068    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:28.138274    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:28.138291    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:28.149689    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:28.149699    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:28.153869    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:28.153878    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:28.167203    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:28.167219    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:28.180487    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:28.180500    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:28.192577    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:28.192588    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:28.207031    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:28.207045    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:28.218978    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:28.218990    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:28.242837    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:28.242847    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:28.278392    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:28.278401    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:28.299919    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:28.299931    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:28.311569    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:28.311583    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:30.825242    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:35.827652    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:35.827905    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:35.853080    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:35.853202    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:35.868834    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:35.868920    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:35.882267    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:35.882341    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:35.893489    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:35.893570    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:35.904475    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:35.904557    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:35.915349    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:35.915429    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:35.925487    4019 logs.go:276] 0 containers: []
	W0916 10:50:35.925499    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:35.925571    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:35.935731    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:35.935749    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:35.935755    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:35.961201    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:35.961212    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:35.972782    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:35.972794    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:35.984161    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:35.984172    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:35.999441    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:35.999456    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:36.011920    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:36.011932    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:36.025431    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:36.025442    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:36.037428    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:36.037440    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:36.051065    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:36.051077    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:36.062756    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:36.062768    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:36.074281    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:36.074293    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:36.097611    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:36.097622    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:36.133204    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:36.133214    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:36.137552    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:36.137559    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:36.177017    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:36.177028    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:38.692711    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:43.693648    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:43.693853    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:43.707759    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:43.707849    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:43.725702    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:43.725789    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:43.736408    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:43.736487    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:43.747111    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:43.747204    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:43.757813    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:43.757899    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:43.768586    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:43.768669    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:43.778929    4019 logs.go:276] 0 containers: []
	W0916 10:50:43.778940    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:43.779014    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:43.789388    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:43.789405    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:43.789411    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:43.803662    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:43.803672    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:43.829040    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:43.829050    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:43.833734    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:43.833741    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:43.870759    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:43.870777    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:43.882815    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:43.882826    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:43.900682    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:43.900692    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:43.913401    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:43.913413    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:43.928373    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:43.928386    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:43.940212    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:43.940221    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:43.952325    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:43.952336    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:43.989286    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:43.989296    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:44.003692    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:44.003707    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:44.015584    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:44.015595    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:44.027149    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:44.027159    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:46.544557    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:51.546594    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:51.546692    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:51.557650    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:51.557737    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:51.568034    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:51.568118    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:51.579834    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:51.579919    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:51.590841    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:51.590930    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:51.601394    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:51.601477    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:51.614910    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:51.614992    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:51.626012    4019 logs.go:276] 0 containers: []
	W0916 10:50:51.626024    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:51.626101    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:51.637214    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:51.637235    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:51.637242    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:51.642274    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:51.642287    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:51.660673    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:51.660686    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:51.675803    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:51.675815    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:51.687441    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:51.687452    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:51.699541    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:51.699552    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:51.725434    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:51.725442    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:51.738706    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:51.738717    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:51.777276    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:51.777285    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:51.789242    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:51.789253    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:51.801763    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:51.801777    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:51.818337    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:51.818354    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:51.835550    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:51.835564    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:51.873899    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:51.873911    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:51.887982    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:51.887993    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:54.402750    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:59.404814    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:59.404938    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:59.416045    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:59.416132    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:59.426889    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:59.426977    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:59.437835    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:59.437917    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:59.448687    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:59.448778    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:59.465287    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:59.465368    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:59.478682    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:59.478764    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:59.489600    4019 logs.go:276] 0 containers: []
	W0916 10:50:59.489616    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:59.489688    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:59.501002    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:59.501019    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:59.501025    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:59.516033    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:59.516042    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:59.528508    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:59.528519    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:59.541332    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:59.541346    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:59.553774    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:59.553785    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:59.558222    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:59.558228    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:59.595517    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:59.595529    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:59.609560    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:59.609576    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:59.625462    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:59.625475    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:59.643598    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:59.643611    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:59.667913    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:59.667924    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:59.703634    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:59.703648    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:59.715416    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:59.715428    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:59.727315    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:59.727325    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:59.738678    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:59.738688    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:51:02.252331    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:07.254586    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:07.254726    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:51:07.276975    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:51:07.277069    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:51:07.292233    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:51:07.292313    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:51:07.302665    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:51:07.302752    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:51:07.313575    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:51:07.313658    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:51:07.324059    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:51:07.324139    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:51:07.335082    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:51:07.335167    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:51:07.354887    4019 logs.go:276] 0 containers: []
	W0916 10:51:07.354898    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:51:07.354971    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:51:07.365532    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:51:07.365550    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:51:07.365556    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:51:07.385364    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:51:07.385375    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:51:07.389981    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:51:07.389987    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:51:07.426558    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:51:07.426574    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:51:07.441027    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:51:07.441038    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:51:07.453508    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:51:07.453519    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:51:07.465696    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:51:07.465709    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:51:07.478078    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:51:07.478091    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:51:07.516122    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:51:07.516133    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:51:07.530877    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:51:07.530889    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:51:07.542776    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:51:07.542787    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:51:07.554591    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:51:07.554602    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:51:07.579398    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:51:07.579406    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:51:07.591379    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:51:07.591388    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:51:07.608847    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:51:07.608860    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:51:10.122576    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:15.123603    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:15.123846    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:51:15.145058    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:51:15.145171    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:51:15.160425    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:51:15.160518    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:51:15.172883    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:51:15.172976    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:51:15.184835    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:51:15.184925    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:51:15.195370    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:51:15.195452    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:51:15.206432    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:51:15.206514    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:51:15.217457    4019 logs.go:276] 0 containers: []
	W0916 10:51:15.217468    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:51:15.217541    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:51:15.228239    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:51:15.228258    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:51:15.228263    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:51:15.232787    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:51:15.232794    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:51:15.248314    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:51:15.248325    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:51:15.262384    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:51:15.262397    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:51:15.275522    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:51:15.275533    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:51:15.298100    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:51:15.298115    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:51:15.323101    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:51:15.323115    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:51:15.334804    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:51:15.334818    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:51:15.369511    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:51:15.369526    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:51:15.387716    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:51:15.387726    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:51:15.399770    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:51:15.399780    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:51:15.411649    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:51:15.411661    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:51:15.426799    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:51:15.426811    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:51:15.439062    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:51:15.439074    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:51:15.451310    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:51:15.451321    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:51:17.989023    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:22.991323    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:22.991562    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:51:23.018793    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:51:23.018930    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:51:23.036392    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:51:23.036489    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:51:23.049153    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:51:23.049246    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:51:23.061551    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:51:23.061654    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:51:23.072126    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:51:23.072196    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:51:23.082993    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:51:23.083061    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:51:23.093198    4019 logs.go:276] 0 containers: []
	W0916 10:51:23.093211    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:51:23.093280    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:51:23.103584    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:51:23.103599    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:51:23.103605    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:51:23.114937    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:51:23.114948    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:51:23.138420    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:51:23.138427    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:51:23.173221    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:51:23.173233    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:51:23.187281    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:51:23.187292    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:51:23.198888    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:51:23.198899    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:51:23.217004    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:51:23.217015    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:51:23.230071    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:51:23.230081    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:51:23.266476    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:51:23.266490    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:51:23.271712    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:51:23.271718    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:51:23.286053    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:51:23.286068    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:51:23.297457    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:51:23.297467    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:51:23.314856    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:51:23.314869    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:51:23.333887    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:51:23.333896    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:51:23.345283    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:51:23.345295    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:51:25.857497    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:30.859559    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:30.865086    4019 out.go:201] 
	W0916 10:51:30.869040    4019 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0916 10:51:30.869048    4019 out.go:270] * 
	W0916 10:51:30.869647    4019 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:51:30.882955    4019 out.go:201] 

** /stderr **
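The stderr block above is a single retry loop: minikube probes https://10.0.2.15:8443/healthz with a short per-request timeout (api_server.go:253/269), and after each failed probe it re-lists the k8s_* containers and tails their logs before trying again, until the overall 6m0s node-start budget runs out. A minimal Go sketch of that probe-and-retry shape, assuming hypothetical names (waitForHealthz, the 3-second pause between attempts) rather than minikube's actual internals:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz is a hypothetical stand-in for the loop logged above:
	// each GET gives up after ~5s (the "Client.Timeout exceeded" gaps),
	// and diagnostics are gathered between attempts.
	func waitForHealthz(url string, budget time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// health probe only; no client certs are presented here
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					return nil
				}
			}
			// here minikube runs "docker ps -a --filter=name=k8s_..." and
			// "docker logs --tail 400 <id>" for each component, then retries
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute))
	}
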
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-707000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-16 10:51:30.979578 -0700 PDT m=+2829.164798751
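Exit status 80 is the code minikube returned for the GUEST_START failure above. The test's shape, visible in the Audit table below, is: create and start the profile with the released v1.26.0 binary, then re-run start against the same running profile with the freshly built binary. A hedged reproduction sketch in Go, where the old-binary path "./minikube-v1.26.0" is a hypothetical stand-in for wherever the harness actually caches it:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes a command, streaming its output, and returns its error
	// (which carries the exit status for non-zero exits).
	func run(args ...string) error {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// Step 1: the released binary creates and starts the profile
		// ("./minikube-v1.26.0" is a hypothetical path).
		if err := run("./minikube-v1.26.0", "start", "-p", "running-upgrade-707000",
			"--memory=2200", "--vm-driver=qemu2"); err != nil {
			fmt.Println("old-binary start failed:", err)
			return
		}
		// Step 2: the binary under test restarts the same running profile;
		// this is the invocation that exited with status 80 above.
		if err := run("out/minikube-darwin-arm64", "start", "-p", "running-upgrade-707000",
			"--memory=2200", "--alsologtostderr", "-v=1", "--driver=qemu2"); err != nil {
			fmt.Println("upgrade start failed:", err)
		}
	}
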
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-707000 -n running-upgrade-707000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-707000 -n running-upgrade-707000: exit status 2 (15.605478s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
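The harness tolerates this non-zero status ("may be ok") because minikube status reports state through its exit code as well as its output: here the host line prints Running while the command exits 2, consistent with a VM that is up but whose cluster components are not healthy. A small sketch, assuming nothing beyond the command shown above, of how such an exit code can be read from Go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "running-upgrade-707000", "-n", "running-upgrade-707000")
		out, err := cmd.Output()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 2 in the run above; the harness notes this "may be ok"
			code = ee.ExitCode()
		}
		fmt.Printf("host=%q exit=%d\n", strings.TrimSpace(string(out)), code)
	}
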
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-707000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-626000          | force-systemd-flag-626000 | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-836000              | force-systemd-env-836000  | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-836000           | force-systemd-env-836000  | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT | 16 Sep 24 10:41 PDT |
	| start   | -p docker-flags-534000                | docker-flags-534000       | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-626000             | force-systemd-flag-626000 | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-626000          | force-systemd-flag-626000 | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT | 16 Sep 24 10:41 PDT |
	| start   | -p cert-expiration-913000             | cert-expiration-913000    | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-534000 ssh               | docker-flags-534000       | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-534000 ssh               | docker-flags-534000       | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-534000                | docker-flags-534000       | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT | 16 Sep 24 10:41 PDT |
	| start   | -p cert-options-161000                | cert-options-161000       | jenkins | v1.34.0 | 16 Sep 24 10:41 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-161000 ssh               | cert-options-161000       | jenkins | v1.34.0 | 16 Sep 24 10:42 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-161000 -- sudo        | cert-options-161000       | jenkins | v1.34.0 | 16 Sep 24 10:42 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-161000                | cert-options-161000       | jenkins | v1.34.0 | 16 Sep 24 10:42 PDT | 16 Sep 24 10:42 PDT |
	| start   | -p running-upgrade-707000             | minikube                  | jenkins | v1.26.0 | 16 Sep 24 10:42 PDT | 16 Sep 24 10:43 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-707000             | running-upgrade-707000    | jenkins | v1.34.0 | 16 Sep 24 10:43 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-913000             | cert-expiration-913000    | jenkins | v1.34.0 | 16 Sep 24 10:45 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-913000             | cert-expiration-913000    | jenkins | v1.34.0 | 16 Sep 24 10:45 PDT | 16 Sep 24 10:45 PDT |
	| start   | -p kubernetes-upgrade-153000          | kubernetes-upgrade-153000 | jenkins | v1.34.0 | 16 Sep 24 10:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-153000          | kubernetes-upgrade-153000 | jenkins | v1.34.0 | 16 Sep 24 10:45 PDT | 16 Sep 24 10:45 PDT |
	| start   | -p kubernetes-upgrade-153000          | kubernetes-upgrade-153000 | jenkins | v1.34.0 | 16 Sep 24 10:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-153000          | kubernetes-upgrade-153000 | jenkins | v1.34.0 | 16 Sep 24 10:45 PDT | 16 Sep 24 10:45 PDT |
	| start   | -p stopped-upgrade-385000             | minikube                  | jenkins | v1.26.0 | 16 Sep 24 10:45 PDT | 16 Sep 24 10:46 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-385000 stop           | minikube                  | jenkins | v1.26.0 | 16 Sep 24 10:46 PDT | 16 Sep 24 10:46 PDT |
	| start   | -p stopped-upgrade-385000             | stopped-upgrade-385000    | jenkins | v1.34.0 | 16 Sep 24 10:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:46:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:46:17.464670    4163 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:46:17.464824    4163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:46:17.464829    4163 out.go:358] Setting ErrFile to fd 2...
	I0916 10:46:17.464833    4163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:46:17.464994    4163 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:46:17.466214    4163 out.go:352] Setting JSON to false
	I0916 10:46:17.486101    4163 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2741,"bootTime":1726506036,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:46:17.486177    4163 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:46:17.490710    4163 out.go:177] * [stopped-upgrade-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:46:17.498854    4163 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:46:17.498919    4163 notify.go:220] Checking for updates...
	I0916 10:46:17.505818    4163 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:46:17.508710    4163 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:46:17.511798    4163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:46:17.514827    4163 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:46:17.516127    4163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:46:17.519145    4163 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:46:17.522802    4163 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 10:46:17.525802    4163 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:46:17.529790    4163 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:46:17.536810    4163 start.go:297] selected driver: qemu2
	I0916 10:46:17.536817    4163 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 10:46:17.536897    4163 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:46:17.539882    4163 cni.go:84] Creating CNI manager for ""
	I0916 10:46:17.539915    4163 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:46:17.539943    4163 start.go:340] cluster config:
	{Name:stopped-upgrade-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 10:46:17.540005    4163 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:46:17.547822    4163 out.go:177] * Starting "stopped-upgrade-385000" primary control-plane node in "stopped-upgrade-385000" cluster
	I0916 10:46:17.551705    4163 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 10:46:17.551726    4163 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0916 10:46:17.551739    4163 cache.go:56] Caching tarball of preloaded images
	I0916 10:46:17.551804    4163 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:46:17.551810    4163 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0916 10:46:17.551857    4163 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/config.json ...
	I0916 10:46:17.552223    4163 start.go:360] acquireMachinesLock for stopped-upgrade-385000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:46:17.552251    4163 start.go:364] duration metric: took 22.167µs to acquireMachinesLock for "stopped-upgrade-385000"
	I0916 10:46:17.552259    4163 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:46:17.552265    4163 fix.go:54] fixHost starting: 
	I0916 10:46:17.552374    4163 fix.go:112] recreateIfNeeded on stopped-upgrade-385000: state=Stopped err=<nil>
	W0916 10:46:17.552382    4163 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:46:17.560784    4163 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-385000" ...
	I0916 10:46:14.365280    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:17.564749    4163 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:46:17.564823    4163 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50487-:22,hostfwd=tcp::50488-:2376,hostname=stopped-upgrade-385000 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/disk.qcow2
	I0916 10:46:17.614309    4163 main.go:141] libmachine: STDOUT: 
	I0916 10:46:17.614335    4163 main.go:141] libmachine: STDERR: 
	I0916 10:46:17.614341    4163 main.go:141] libmachine: Waiting for VM to start (ssh -p 50487 docker@127.0.0.1)...
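The wait step above blocks until the forwarded SSH port (host 50487, guest 22, per the hostfwd flag in the qemu-system-aarch64 invocation) starts accepting connections. A minimal sketch of such a readiness loop, assuming nothing about minikube's internals beyond what the log shows; waitForSSH and the timeouts are illustrative names and values, not minikube's actual API:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls the forwarded port until it accepts a TCP
    // connection or the deadline expires.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // port is open; the SSH handshake can proceed
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
        fmt.Println(waitForSSH("127.0.0.1:50487", 3*time.Minute))
    }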
	I0916 10:46:19.367616    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:19.368226    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:19.407304    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:19.407478    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:19.435988    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:19.436100    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:19.451383    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:19.451478    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:19.463210    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:19.463300    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:19.476437    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:19.476523    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:19.492947    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:19.493028    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:19.503174    4019 logs.go:276] 0 containers: []
	W0916 10:46:19.503189    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:19.503252    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:19.514136    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:19.514153    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:19.514159    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:19.550533    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:19.550546    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:19.565254    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:19.565265    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:19.578969    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:19.578984    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:19.590304    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:19.590316    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:19.601546    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:19.601560    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:19.612693    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:19.612702    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:19.623848    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:19.623860    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:19.628048    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:19.628057    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:19.639122    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:19.639132    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:19.650075    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:19.650087    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:19.688926    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:19.688934    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:19.709872    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:19.709886    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:19.725340    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:19.725352    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:19.737330    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:19.737343    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:19.748845    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:19.748855    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:19.773273    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:19.773284    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
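Rounds like the one above alternate a healthz probe against https://10.0.2.15:8443/healthz with container log gathering whenever the probe fails. A simplified sketch of such a probe, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and an "ok" body on success; minikube's real client setup may differ:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs one probe with a client-side timeout, the
    // same failure mode ("context deadline exceeded") seen in the log.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }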
	I0916 10:46:22.292253    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:27.294677    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:27.294944    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:27.308938    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:27.309027    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:27.320029    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:27.320108    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:27.330825    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:27.330907    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:27.341308    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:27.341392    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:27.351866    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:27.351935    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:27.362258    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:27.362328    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:27.373127    4019 logs.go:276] 0 containers: []
	W0916 10:46:27.373140    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:27.373218    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:27.383791    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:27.383807    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:27.383814    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:27.395056    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:27.395067    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:27.407251    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:27.407262    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:27.418788    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:27.418800    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:27.429951    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:27.429963    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:27.456312    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:27.456320    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:27.498819    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:27.498831    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:27.503449    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:27.503459    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:27.515009    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:27.515027    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:27.527009    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:27.527020    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:27.544339    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:27.544349    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:27.561700    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:27.561712    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:27.597994    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:27.598004    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:27.616208    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:27.616219    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:27.627865    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:27.627878    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:27.643123    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:27.643133    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:27.660751    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:27.660760    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:30.174225    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:35.176508    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:35.177189    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:35.218387    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:35.218566    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:35.239866    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:35.239992    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:35.254357    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:35.254437    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:35.266202    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:35.266284    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:35.276921    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:35.277008    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:35.287099    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:35.287171    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:35.297103    4019 logs.go:276] 0 containers: []
	W0916 10:46:35.297116    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:35.297177    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:35.307307    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:35.307324    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:35.307330    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:35.321115    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:35.321127    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:35.332349    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:35.332359    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:35.343270    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:35.343282    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:35.368359    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:35.368368    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:35.373002    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:35.373012    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:35.384229    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:35.384240    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:35.395946    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:35.395956    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:35.412834    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:35.412843    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:35.425720    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:35.425738    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:35.465966    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:35.465974    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:35.499609    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:35.499621    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:35.510720    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:35.510732    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:35.529178    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:35.529190    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:35.551234    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:35.551248    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:35.567155    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:35.567165    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:35.578873    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:35.578883    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:38.090060    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:37.691375    4163 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/config.json ...
	I0916 10:46:37.691740    4163 machine.go:93] provisionDockerMachine start ...
	I0916 10:46:37.691830    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:37.692035    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:37.692046    4163 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:46:37.768242    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 10:46:37.768262    4163 buildroot.go:166] provisioning hostname "stopped-upgrade-385000"
	I0916 10:46:37.768364    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:37.768551    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:37.768564    4163 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-385000 && echo "stopped-upgrade-385000" | sudo tee /etc/hostname
	I0916 10:46:37.845784    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-385000
	
	I0916 10:46:37.845854    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:37.845983    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:37.845994    4163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-385000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-385000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-385000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:46:37.915269    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:46:37.915282    4163 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19649-964/.minikube CaCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19649-964/.minikube}
	I0916 10:46:37.915291    4163 buildroot.go:174] setting up certificates
	I0916 10:46:37.915300    4163 provision.go:84] configureAuth start
	I0916 10:46:37.915306    4163 provision.go:143] copyHostCerts
	I0916 10:46:37.915380    4163 exec_runner.go:144] found /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem, removing ...
	I0916 10:46:37.915387    4163 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem
	I0916 10:46:37.915492    4163 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem (1082 bytes)
	I0916 10:46:37.915686    4163 exec_runner.go:144] found /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem, removing ...
	I0916 10:46:37.915690    4163 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem
	I0916 10:46:37.915732    4163 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem (1123 bytes)
	I0916 10:46:37.915837    4163 exec_runner.go:144] found /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem, removing ...
	I0916 10:46:37.915840    4163 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem
	I0916 10:46:37.915887    4163 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem (1679 bytes)
	I0916 10:46:37.916002    4163 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-385000 san=[127.0.0.1 localhost minikube stopped-upgrade-385000]
	I0916 10:46:38.056167    4163 provision.go:177] copyRemoteCerts
	I0916 10:46:38.056216    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:46:38.056226    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:46:38.093208    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 10:46:38.099983    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:46:38.106598    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:46:38.114089    4163 provision.go:87] duration metric: took 198.787667ms to configureAuth
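configureAuth above generates a server certificate whose SANs match the san=[...] list logged earlier (127.0.0.1 localhost minikube stopped-upgrade-385000) and copies it into /etc/docker. A self-signed sketch of that generation with Go's crypto/x509; minikube actually signs with its CA key, and the field values here simply mirror what the log shows (the 26280h lifetime comes from CertExpiration in the config dump):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-385000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-385000"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity: template doubles as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }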
	I0916 10:46:38.114102    4163 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:46:38.114212    4163 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:46:38.114252    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:38.114337    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:38.114344    4163 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 10:46:38.184111    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 10:46:38.184125    4163 buildroot.go:70] root file system type: tmpfs
	I0916 10:46:38.184175    4163 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 10:46:38.184224    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:38.184332    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:38.184373    4163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 10:46:38.259051    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 10:46:38.259124    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:38.259245    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:38.259254    4163 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 10:46:38.622653    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 10:46:38.622668    4163 machine.go:96] duration metric: took 930.955333ms to provisionDockerMachine
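The docker.service update above follows a replace-only-if-different pattern: the new unit is written to docker.service.new, and only when diff reports a difference (or, as here, the old file does not exist at all) is it moved into place and the daemon reloaded, enabled, and restarted. A local sketch of that idiom; the function name and paths are illustrative:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // replaceIfChanged moves newPath over path only when contents differ,
    // returning true when a swap (and hence a service restart) is needed.
    func replaceIfChanged(path, newPath string) (bool, error) {
        next, err := os.ReadFile(newPath)
        if err != nil {
            return false, err
        }
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, next) {
            // Unchanged: discard the .new file, no restart needed.
            return false, os.Remove(newPath)
        }
        if err != nil && !os.IsNotExist(err) {
            return false, err
        }
        // Changed or missing: swap the new unit into place.
        return true, os.Rename(newPath, path)
    }

    func main() {
        changed, err := replaceIfChanged("/tmp/docker.service", "/tmp/docker.service.new")
        fmt.Println(changed, err)
    }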
	I0916 10:46:38.622674    4163 start.go:293] postStartSetup for "stopped-upgrade-385000" (driver="qemu2")
	I0916 10:46:38.622681    4163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:46:38.622757    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:46:38.622769    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:46:38.658258    4163 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:46:38.659627    4163 info.go:137] Remote host: Buildroot 2021.02.12
	I0916 10:46:38.659634    4163 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19649-964/.minikube/addons for local assets ...
	I0916 10:46:38.659704    4163 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19649-964/.minikube/files for local assets ...
	I0916 10:46:38.659817    4163 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem -> 14512.pem in /etc/ssl/certs
	I0916 10:46:38.659917    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:46:38.662443    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem --> /etc/ssl/certs/14512.pem (1708 bytes)
	I0916 10:46:38.670180    4163 start.go:296] duration metric: took 47.50175ms for postStartSetup
	I0916 10:46:38.670195    4163 fix.go:56] duration metric: took 21.118897459s for fixHost
	I0916 10:46:38.670234    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:38.670338    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:38.670344    4163 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:46:38.735096    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726508798.356194129
	
	I0916 10:46:38.735104    4163 fix.go:216] guest clock: 1726508798.356194129
	I0916 10:46:38.735108    4163 fix.go:229] Guest: 2024-09-16 10:46:38.356194129 -0700 PDT Remote: 2024-09-16 10:46:38.670197 -0700 PDT m=+21.238914418 (delta=-314.002871ms)
	I0916 10:46:38.735124    4163 fix.go:200] guest clock delta is within tolerance: -314.002871ms
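The clock check above runs date +%s.%N in the guest, parses the seconds.nanoseconds output, and accepts the host/guest delta when it falls inside a tolerance. A sketch of that parse-and-compare; the tolerance value is an assumption, since the log reports only that -314ms was within it:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output (nine-digit
    // nanosecond field, as GNU date prints) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1726508798.356194129\n")
        if err != nil {
            panic(err)
        }
        delta := guest.Sub(time.Now())
        const tolerance = 2 * time.Second // illustrative threshold
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }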
	I0916 10:46:38.735126    4163 start.go:83] releasing machines lock for "stopped-upgrade-385000", held for 21.183839209s
	I0916 10:46:38.735198    4163 ssh_runner.go:195] Run: cat /version.json
	I0916 10:46:38.735211    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:46:38.735382    4163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:46:38.735401    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	W0916 10:46:38.735828    4163 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50487: connect: connection refused
	I0916 10:46:38.735848    4163 retry.go:31] will retry after 267.174656ms: dial tcp [::1]:50487: connect: connection refused
	W0916 10:46:38.768150    4163 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0916 10:46:38.768200    4163 ssh_runner.go:195] Run: systemctl --version
	I0916 10:46:38.769879    4163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:46:38.771542    4163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:46:38.771574    4163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0916 10:46:38.774285    4163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0916 10:46:38.779147    4163 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 10:46:38.779178    4163 start.go:495] detecting cgroup driver to use...
	I0916 10:46:38.779256    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:46:38.786444    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0916 10:46:38.790046    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:46:38.793417    4163 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:46:38.793449    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:46:38.796405    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:46:38.799249    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:46:38.802501    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:46:38.805828    4163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:46:38.808892    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:46:38.811661    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:46:38.814586    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:46:38.817900    4163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:46:38.820687    4163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:46:38.823210    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:38.903066    4163 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:46:38.909003    4163 start.go:495] detecting cgroup driver to use...
	I0916 10:46:38.909060    4163 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 10:46:38.915442    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:46:38.921091    4163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:46:38.927254    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:46:38.932373    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:46:38.936787    4163 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:46:38.985941    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:46:38.991040    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:46:38.997883    4163 ssh_runner.go:195] Run: which cri-dockerd
	I0916 10:46:38.999275    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:46:39.002249    4163 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0916 10:46:39.007320    4163 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 10:46:39.091226    4163 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 10:46:39.168285    4163 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:46:39.168360    4163 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:46:39.173721    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:39.254281    4163 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:46:40.408997    4163 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15474225s)
	I0916 10:46:40.409068    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:46:40.413821    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:46:40.417940    4163 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:46:40.497679    4163 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 10:46:40.577714    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:40.658194    4163 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 10:46:40.664199    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:46:40.668313    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:40.746541    4163 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 10:46:40.783820    4163 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:46:40.783915    4163 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 10:46:40.786440    4163 start.go:563] Will wait 60s for crictl version
	I0916 10:46:40.786493    4163 ssh_runner.go:195] Run: which crictl
	I0916 10:46:40.787969    4163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:46:40.802441    4163 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0916 10:46:40.802530    4163 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:46:40.818561    4163 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:46:40.837806    4163 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0916 10:46:40.837892    4163 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0916 10:46:40.839195    4163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
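The one-liner above updates /etc/hosts idempotently: it filters out any existing host.minikube.internal entry, appends the fresh 10.0.2.2 mapping, and copies the temp file back over /etc/hosts. An equivalent sketch, operating on a local file for simplicity:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry drops any line ending in "\t<host>" and appends
    // a fresh "<ip>\t<host>" mapping, mirroring the grep -v / echo pair.
    func upsertHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := lines[:0]
        for _, line := range lines {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(upsertHostsEntry("/tmp/hosts", "10.0.2.2", "host.minikube.internal"))
    }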
	I0916 10:46:40.842812    4163 kubeadm.go:883] updating cluster {Name:stopped-upgrade-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0916 10:46:40.842867    4163 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 10:46:40.842919    4163 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:46:40.852864    4163 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 10:46:40.852878    4163 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 10:46:40.852932    4163 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:46:40.856050    4163 ssh_runner.go:195] Run: which lz4
	I0916 10:46:40.857408    4163 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:46:40.858847    4163 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:46:40.858858    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0916 10:46:41.756068    4163 docker.go:649] duration metric: took 898.733166ms to copy over tarball
	I0916 10:46:41.756140    4163 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
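The preload transfer above is guarded by an existence check: stat -c "%s %y" runs first, and only when it exits non-zero is the ~360 MB tarball scp'd over and unpacked with tar -I lz4. A local sketch of that guard; comparing sizes stands in for the full check, whose exact criteria the log does not spell out:

    package main

    import (
        "fmt"
        "os"
    )

    // needsTransfer reports whether src must be copied because dst is
    // missing or differs in size (the "%s" field of the stat format).
    func needsTransfer(src, dst string) (bool, error) {
        s, err := os.Stat(src)
        if err != nil {
            return false, err
        }
        d, err := os.Stat(dst)
        if os.IsNotExist(err) {
            return true, nil // dst absent: transfer required
        }
        if err != nil {
            return false, err
        }
        return s.Size() != d.Size(), nil
    }

    func main() {
        ok, err := needsTransfer("/tmp/src.tar.lz4", "/tmp/dst.tar.lz4")
        fmt.Println(ok, err)
    }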
	I0916 10:46:43.092041    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:43.092211    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:43.104409    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:43.104496    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:43.115076    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:43.115164    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:43.125250    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:43.125338    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:43.135667    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:43.135751    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:43.146279    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:43.146359    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:43.156663    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:43.156742    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:43.167106    4019 logs.go:276] 0 containers: []
	W0916 10:46:43.167117    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:43.167188    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:43.178247    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:43.178264    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:43.178270    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:43.189735    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:43.189747    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:43.227400    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:43.227411    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:43.241486    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:43.241496    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:43.252854    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:43.252869    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:43.266299    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:43.266310    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:43.278139    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:43.278154    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:43.295444    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:43.295454    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:43.306560    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:43.306574    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:43.318187    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:43.318202    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:43.329250    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:43.329263    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:43.369076    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:43.369083    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:43.373327    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:43.373337    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:43.385306    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:43.385316    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:43.396902    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:43.396915    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:43.420047    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:43.420054    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:43.432042    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:43.432054    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:42.911591    4163 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155478625s)
	I0916 10:46:42.911604    4163 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 10:46:42.927194    4163 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:46:42.930574    4163 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0916 10:46:42.936082    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:43.013483    4163 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:46:44.472774    4163 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.459325042s)
	I0916 10:46:44.472879    4163 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:46:44.485784    4163 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 10:46:44.485792    4163 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 10:46:44.485797    4163 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 10:46:44.489697    4163 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:44.492456    4163 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.495123    4163 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:44.495948    4163 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.497863    4163 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.498026    4163 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.499702    4163 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.499850    4163 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.500851    4163 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:44.501135    4163 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.502111    4163 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0916 10:46:44.502111    4163 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.503171    4163 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:44.503289    4163 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:44.504455    4163 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0916 10:46:44.505075    4163 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:44.935446    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.939782    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.946407    4163 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0916 10:46:44.946435    4163 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.946508    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.953955    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.960387    4163 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0916 10:46:44.960405    4163 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.960457    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.963724    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.970101    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0916 10:46:44.971019    4163 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0916 10:46:44.971036    4163 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.971101    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.975381    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0916 10:46:44.976382    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0916 10:46:44.986592    4163 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0916 10:46:44.986615    4163 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.986675    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.987602    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0916 10:46:44.992475    4163 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0916 10:46:44.992492    4163 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0916 10:46:44.992537    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0916 10:46:45.002865    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0916 10:46:45.006440    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:45.011271    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0916 10:46:45.011394    4163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0916 10:46:45.016820    4163 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0916 10:46:45.016842    4163 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:45.016827    4163 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0916 10:46:45.016865    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0916 10:46:45.016896    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:45.023828    4163 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0916 10:46:45.023838    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0916 10:46:45.038751    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0916 10:46:45.052973    4163 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0916 10:46:45.055501    4163 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0916 10:46:45.055644    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:45.065140    4163 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0916 10:46:45.065165    4163 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:45.065224    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:45.074996    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0916 10:46:45.075123    4163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0916 10:46:45.076685    4163 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0916 10:46:45.076695    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0916 10:46:45.114968    4163 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0916 10:46:45.114980    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0916 10:46:45.151705    4163 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0916 10:46:45.340243    4163 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0916 10:46:45.340554    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:45.358840    4163 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0916 10:46:45.358871    4163 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:45.358966    4163 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:45.376429    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 10:46:45.376587    4163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 10:46:45.378175    4163 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 10:46:45.378189    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0916 10:46:45.410812    4163 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 10:46:45.410826    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0916 10:46:45.642283    4163 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 10:46:45.642324    4163 cache_images.go:92] duration metric: took 1.156561458s to LoadCachedImages
	W0916 10:46:45.642370    4163 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
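
The block above is minikube's cache-load loop: each required image is inspected in the runtime by digest, an arch-mismatched copy is removed, the cached tarball is stat'ed inside the guest, scp'ed over when missing, and piped into docker load. The pause, coredns, and storage-provisioner tarballs made it across; the loop then aborted because the kube-proxy tarball was absent from the host cache (the X warning above). A minimal sketch of the per-image path, reusing the paths from this log; HOST_CACHE and VM are hypothetical placeholders:

    img=/var/lib/minikube/images/pause_3.7
    stat -c "%s %y" "$img" \
      || scp "$HOST_CACHE/pause_3.7" "docker@$VM:$img"   # transfer only when absent (placeholders)
    sudo cat "$img" | docker load                         # stream the tarball into the runtime
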
	I0916 10:46:45.642376    4163 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0916 10:46:45.642425    4163 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-385000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
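
The kubelet flags above are rendered into a systemd drop-in, written below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Assuming a systemd guest, a quick way to confirm what the kubelet will actually run with:

    systemctl cat kubelet          # unit file plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload   # re-read after any edit, as the log does below
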
	I0916 10:46:45.642507    4163 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 10:46:45.657216    4163 cni.go:84] Creating CNI manager for ""
	I0916 10:46:45.657228    4163 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:46:45.657233    4163 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:46:45.657242    4163 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-385000 NodeName:stopped-upgrade-385000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
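
Note the CgroupDriver:cgroupfs entry: minikube probed the runtime with docker info --format {{.CgroupDriver}} just above and must keep the kubelet's driver in agreement, since a docker/kubelet cgroup-driver mismatch is a classic source of kubelet startup failures. The same consistency check by hand, as a sketch (the config path comes from the ExecStart line above):

    docker info --format '{{.CgroupDriver}}'             # what the runtime uses
    sudo grep cgroupDriver /var/lib/kubelet/config.yaml  # what the kubelet is told to use
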
	I0916 10:46:45.657319    4163 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-385000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
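
This multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what every kubeadm phase below consumes via --config. One way to sanity-check it before running the phases, assuming the preflight phase accepts the same file as it does in v1.24 (the env PATH wrapper mirrors the invocations later in this log):

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
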
	
	I0916 10:46:45.657378    4163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0916 10:46:45.660472    4163 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:46:45.660505    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:46:45.662966    4163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0916 10:46:45.667962    4163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:46:45.672649    4163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0916 10:46:45.678346    4163 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0916 10:46:45.679646    4163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
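
The one-liner above is an idempotent /etc/hosts edit: grep -v strips any stale control-plane.minikube.internal entry, echo appends the fresh one, and the result is staged in a temp file before a single sudo cp swaps it in. Reformatted for readability:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "10.0.2.15	control-plane.minikube.internal"
    } > /tmp/h.$$            # $$ = shell PID, so the temp name is unique per run
    sudo cp /tmp/h.$$ /etc/hosts
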
	I0916 10:46:45.683051    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:45.752711    4163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:46:45.758701    4163 certs.go:68] Setting up /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000 for IP: 10.0.2.15
	I0916 10:46:45.758712    4163 certs.go:194] generating shared ca certs ...
	I0916 10:46:45.758721    4163 certs.go:226] acquiring lock for ca certs: {Name:mk95bad6e61a22ab8ae5ec5f8cd43ca9ad7a3f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:46:45.758874    4163 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key
	I0916 10:46:45.758911    4163 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key
	I0916 10:46:45.758920    4163 certs.go:256] generating profile certs ...
	I0916 10:46:45.758978    4163 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.key
	I0916 10:46:45.758993    4163 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key.a125086a
	I0916 10:46:45.759002    4163 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt.a125086a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0916 10:46:45.796891    4163 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt.a125086a ...
	I0916 10:46:45.796905    4163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt.a125086a: {Name:mk7cf1853e70135d80fe55d14110a29e8f3c472c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:46:45.797703    4163 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key.a125086a ...
	I0916 10:46:45.797708    4163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key.a125086a: {Name:mkfcacca423acee63f1eaba2b7a073b3c1e7f477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:46:45.797870    4163 certs.go:381] copying /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt.a125086a -> /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt
	I0916 10:46:45.798032    4163 certs.go:385] copying /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key.a125086a -> /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key
	I0916 10:46:45.798171    4163 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/proxy-client.key
	I0916 10:46:45.798304    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/1451.pem (1338 bytes)
	W0916 10:46:45.798334    4163 certs.go:480] ignoring /Users/jenkins/minikube-integration/19649-964/.minikube/certs/1451_empty.pem, impossibly tiny 0 bytes
	I0916 10:46:45.798339    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:46:45.798361    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:46:45.798380    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:46:45.798397    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem (1679 bytes)
	I0916 10:46:45.798661    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem (1708 bytes)
	I0916 10:46:45.798990    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:46:45.807636    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:46:45.814563    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:46:45.822073    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 10:46:45.829320    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 10:46:45.836392    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:46:45.842960    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:46:45.849902    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:46:45.857361    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/certs/1451.pem --> /usr/share/ca-certificates/1451.pem (1338 bytes)
	I0916 10:46:45.863802    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem --> /usr/share/ca-certificates/14512.pem (1708 bytes)
	I0916 10:46:45.870469    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:46:45.877431    4163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:46:45.882827    4163 ssh_runner.go:195] Run: openssl version
	I0916 10:46:45.884720    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:46:45.887424    4163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:45.888825    4163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:45.888845    4163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:45.890483    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:46:45.893793    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1451.pem && ln -fs /usr/share/ca-certificates/1451.pem /etc/ssl/certs/1451.pem"
	I0916 10:46:45.897008    4163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1451.pem
	I0916 10:46:45.898470    4163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 17:19 /usr/share/ca-certificates/1451.pem
	I0916 10:46:45.898490    4163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1451.pem
	I0916 10:46:45.900174    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1451.pem /etc/ssl/certs/51391683.0"
	I0916 10:46:45.902833    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14512.pem && ln -fs /usr/share/ca-certificates/14512.pem /etc/ssl/certs/14512.pem"
	I0916 10:46:45.906325    4163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14512.pem
	I0916 10:46:45.907719    4163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 17:19 /usr/share/ca-certificates/14512.pem
	I0916 10:46:45.907740    4163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14512.pem
	I0916 10:46:45.909294    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14512.pem /etc/ssl/certs/3ec20f2e.0"
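
The ls/openssl/ln sequence above is a hand-rolled c_rehash: each CA is linked into /etc/ssl/certs, hashed with openssl x509 -hash, and symlinked under <hash>.0, which is the filename OpenSSL's hashed-directory lookup expects at verification time. The same step for one cert, as a sketch:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem        # make the CA visible to OpenSSL
    h=$(openssl x509 -hash -noout -in "$pem")               # b5213941 in this log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
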
	I0916 10:46:45.912741    4163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:46:45.914085    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:46:45.915980    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:46:45.917649    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:46:45.919572    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:46:45.921221    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:46:45.922971    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
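
The burst of openssl -checkend calls above asks whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger regeneration. An equivalent loop, with the cert list taken from the log:

    for c in apiserver-kubelet-client apiserver-etcd-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        || echo "$c.crt expires within 24h"
    done
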
	I0916 10:46:45.924774    4163 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 10:46:45.924849    4163 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:46:45.935303    4163 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:46:45.938569    4163 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:46:45.938574    4163 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:46:45.938599    4163 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:46:45.941539    4163 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:46:45.941832    4163 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-385000" does not appear in /Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:46:45.941941    4163 kubeconfig.go:62] /Users/jenkins/minikube-integration/19649-964/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-385000" cluster setting kubeconfig missing "stopped-upgrade-385000" context setting]
	I0916 10:46:45.942128    4163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/kubeconfig: {Name:mk3766c19461825f7de68cf1dc4ddceadf57e288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:46:45.942595    4163 kapi.go:59] client config for stopped-upgrade-385000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.key", CAFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104389800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:46:45.942912    4163 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:46:45.946038    4163 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-385000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
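
Drift detection here is just diff with exit-code semantics: diff -u returns 0 when the deployed kubeadm.yaml matches the freshly rendered .new file and 1 when they diverge, as they do above (the cri-dockerd socket is now given as a unix:// URL, and the cgroup driver is corrected from systemd to cgroupfs). As a sketch:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      && echo "config unchanged" \
      || echo "drift detected, reconfiguring from kubeadm.yaml.new"
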
	I0916 10:46:45.946046    4163 kubeadm.go:1160] stopping kube-system containers ...
	I0916 10:46:45.946095    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:46:45.956823    4163 docker.go:483] Stopping containers: [8d9f55826a97 8d4d0ab15021 0b4e9b314038 bc2f80890fd2 260c90f3d5ef 24a3271025cd 7c61046fb44a bd11f23a2766]
	I0916 10:46:45.956900    4163 ssh_runner.go:195] Run: docker stop 8d9f55826a97 8d4d0ab15021 0b4e9b314038 bc2f80890fd2 260c90f3d5ef 24a3271025cd 7c61046fb44a bd11f23a2766
	I0916 10:46:45.969747    4163 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 10:46:45.975984    4163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:46:45.978756    4163 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:46:45.978761    4163 kubeadm.go:157] found existing configuration files:
	
	I0916 10:46:45.978787    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf
	I0916 10:46:45.981862    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:46:45.981888    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:46:45.984733    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf
	I0916 10:46:45.987066    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:46:45.987091    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:46:45.989955    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf
	I0916 10:46:45.993090    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:46:45.993115    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:46:45.995723    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf
	I0916 10:46:45.998355    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:46:45.998383    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:46:46.001360    4163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:46:46.004149    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:46:46.027615    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:46:46.317453    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:46:46.453742    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:46:46.480738    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
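
Rather than a full kubeadm init, the restart path replays individual init phases against the same config file. The five invocations above, condensed (word-splitting on the unquoted $phase is intentional so "certs all" expands to two arguments):

    K=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
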
	I0916 10:46:46.499848    4163 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:46:46.499923    4163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:47.000909    4163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:45.945780    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:47.501939    4163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:47.506170    4163 api_server.go:72] duration metric: took 1.006358125s to wait for apiserver process to appear ...
	I0916 10:46:47.506182    4163 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:46:47.506192    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
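
From here the log interleaves two minikube processes, pid 4163 and pid 4019, in the same wait loop: poll https://10.0.2.15:8443/healthz with a short per-request timeout, and on each timeout (pid 4019) sweep container logs before retrying. The poll itself reduces to something like the following (curl flags: -k skip TLS verification, -s quiet, -f fail on HTTP errors):

    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      sleep 1   # minikube re-checks on its own schedule; a plain loop suffices as a sketch
    done
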
	I0916 10:46:50.947809    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:50.948169    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:50.975854    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:50.976004    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:50.993417    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:50.993515    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:51.006498    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:51.006592    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:51.018442    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:51.018532    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:51.029361    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:51.029446    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:51.039966    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:51.040040    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:51.050678    4019 logs.go:276] 0 containers: []
	W0916 10:46:51.050693    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:51.050777    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:51.061114    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:51.061132    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:51.061139    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:51.098350    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:51.098364    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:51.111376    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:51.111389    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:51.123682    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:51.123697    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:51.135776    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:51.135789    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:51.153534    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:51.153552    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:51.164676    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:51.164689    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:46:51.175846    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:51.175860    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:51.214640    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:51.214649    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:51.227960    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:51.227973    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:51.253792    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:51.253814    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:51.268332    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:51.268350    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:51.280622    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:51.280635    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:51.291960    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:51.291970    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:51.306398    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:51.306414    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:51.317876    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:51.317888    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:51.329793    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:51.329808    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
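
Each failed healthz round ends in a diagnostic sweep like the one above: enumerate containers per component with docker ps name filters, tail the last 400 lines of each, then pull kubelet, docker/cri-docker, and dmesg output. Boiled down to its shape:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter=name=k8s_$name --format={{.ID}}); do
        docker logs --tail 400 "$id"     # one gather per container, current and exited
      done
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
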
	I0916 10:46:52.508076    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:52.508098    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:53.836580    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:57.508217    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:57.508262    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:58.838653    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:58.838765    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:46:58.849743    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:46:58.849835    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:46:58.860646    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:46:58.860741    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:46:58.871432    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:46:58.871509    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:46:58.882592    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:46:58.882679    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:46:58.893613    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:46:58.893699    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:46:58.908133    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:46:58.908213    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:46:58.917829    4019 logs.go:276] 0 containers: []
	W0916 10:46:58.917839    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:46:58.917912    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:46:58.928259    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:46:58.928276    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:46:58.928282    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:46:58.942535    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:46:58.942546    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:46:58.966776    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:46:58.966789    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:46:58.985655    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:46:58.985666    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:46:58.997257    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:46:58.997278    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:46:59.014782    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:46:59.014796    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:46:59.027732    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:46:59.027747    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:46:59.049962    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:46:59.049969    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:46:59.061140    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:46:59.061150    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:46:59.072497    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:46:59.072508    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:46:59.083657    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:46:59.083669    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:46:59.094958    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:46:59.094970    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:46:59.106103    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:46:59.106114    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:46:59.118002    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:46:59.118013    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:46:59.157866    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:46:59.157874    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:46:59.162419    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:46:59.162427    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:46:59.196843    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:46:59.196853    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:47:01.709134    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:02.508566    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:02.508595    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:06.711191    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:06.711329    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:47:06.723988    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:47:06.724088    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:47:06.735055    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:47:06.735151    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:47:06.746039    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:47:06.746126    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:47:06.756737    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:47:06.756839    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:47:06.775992    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:47:06.776076    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:47:06.787077    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:47:06.787159    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:47:06.813202    4019 logs.go:276] 0 containers: []
	W0916 10:47:06.813218    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:47:06.813298    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:47:06.833101    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:47:06.833125    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:47:06.833133    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:47:06.855007    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:47:06.855018    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:47:06.866476    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:47:06.866491    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:47:06.880699    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:47:06.880710    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:47:06.892292    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:47:06.892301    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:47:06.904946    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:47:06.904960    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:47:06.916457    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:47:06.916468    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:47:06.939637    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:47:06.939648    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:47:06.951394    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:47:06.951406    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:47:06.989090    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:47:06.989106    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:47:07.007844    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:47:07.007853    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:47:07.025310    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:47:07.025322    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:47:07.037907    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:47:07.037920    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:47:07.049188    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:47:07.049199    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:47:07.060054    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:47:07.060068    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:47:07.098754    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:47:07.098764    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:47:07.103459    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:47:07.103468    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:47:07.509157    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:07.509211    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:09.617375    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:12.509949    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:12.510042    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:14.619889    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:14.620136    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:47:14.643917    4019 logs.go:276] 2 containers: [3ed66d8b99fa 0a8b24166bba]
	I0916 10:47:14.644042    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:47:14.659315    4019 logs.go:276] 2 containers: [889fc5742d5f 301c85c8c62e]
	I0916 10:47:14.659393    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:47:14.671532    4019 logs.go:276] 1 containers: [3e573cfb24b8]
	I0916 10:47:14.671602    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:47:14.683072    4019 logs.go:276] 2 containers: [11e2cfd754d8 4021b939b431]
	I0916 10:47:14.683156    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:47:14.693389    4019 logs.go:276] 1 containers: [43b9656f3ae0]
	I0916 10:47:14.693465    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:47:14.704231    4019 logs.go:276] 2 containers: [1c2cfd030c51 c6c02703a52b]
	I0916 10:47:14.704302    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:47:14.717026    4019 logs.go:276] 0 containers: []
	W0916 10:47:14.717039    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:47:14.717104    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:47:14.727992    4019 logs.go:276] 2 containers: [97b9ab80816a cdf81d4918eb]
	I0916 10:47:14.728010    4019 logs.go:123] Gathering logs for kube-apiserver [3ed66d8b99fa] ...
	I0916 10:47:14.728015    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed66d8b99fa"
	I0916 10:47:14.742137    4019 logs.go:123] Gathering logs for kube-scheduler [4021b939b431] ...
	I0916 10:47:14.742147    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4021b939b431"
	I0916 10:47:14.754529    4019 logs.go:123] Gathering logs for kube-proxy [43b9656f3ae0] ...
	I0916 10:47:14.754541    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9656f3ae0"
	I0916 10:47:14.766235    4019 logs.go:123] Gathering logs for storage-provisioner [cdf81d4918eb] ...
	I0916 10:47:14.766248    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdf81d4918eb"
	I0916 10:47:14.777861    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:47:14.777873    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:47:14.782266    4019 logs.go:123] Gathering logs for kube-apiserver [0a8b24166bba] ...
	I0916 10:47:14.782277    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a8b24166bba"
	I0916 10:47:14.793476    4019 logs.go:123] Gathering logs for kube-scheduler [11e2cfd754d8] ...
	I0916 10:47:14.793489    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11e2cfd754d8"
	I0916 10:47:14.805596    4019 logs.go:123] Gathering logs for kube-controller-manager [c6c02703a52b] ...
	I0916 10:47:14.805605    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6c02703a52b"
	I0916 10:47:14.816868    4019 logs.go:123] Gathering logs for storage-provisioner [97b9ab80816a] ...
	I0916 10:47:14.816881    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b9ab80816a"
	I0916 10:47:14.830200    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:47:14.830210    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:47:14.842190    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:47:14.842200    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:47:14.880024    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:47:14.880030    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:47:14.915532    4019 logs.go:123] Gathering logs for etcd [889fc5742d5f] ...
	I0916 10:47:14.915544    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 889fc5742d5f"
	I0916 10:47:14.929743    4019 logs.go:123] Gathering logs for etcd [301c85c8c62e] ...
	I0916 10:47:14.929752    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 301c85c8c62e"
	I0916 10:47:14.941025    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:47:14.941037    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:47:14.964835    4019 logs.go:123] Gathering logs for coredns [3e573cfb24b8] ...
	I0916 10:47:14.964841    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e573cfb24b8"
	I0916 10:47:14.983623    4019 logs.go:123] Gathering logs for kube-controller-manager [1c2cfd030c51] ...
	I0916 10:47:14.983632    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2cfd030c51"
	I0916 10:47:17.502860    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:17.511130    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:17.511187    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:22.505522    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:22.505615    4019 kubeadm.go:597] duration metric: took 4m4.83813525s to restartPrimaryControlPlane
	W0916 10:47:22.505689    4019 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 10:47:22.505732    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0916 10:47:23.411457    4019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:47:23.416432    4019 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:47:23.419173    4019 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:47:23.422085    4019 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:47:23.422091    4019 kubeadm.go:157] found existing configuration files:
	
	I0916 10:47:23.422124    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf
	I0916 10:47:23.424750    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:47:23.424785    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:47:23.427834    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf
	I0916 10:47:23.430608    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:47:23.430639    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:47:23.433298    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf
	I0916 10:47:23.436472    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:47:23.436497    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:47:23.439830    4019 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf
	I0916 10:47:23.442644    4019 kubeadm.go:163] "https://control-plane.minikube.internal:50291" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50291 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:47:23.442669    4019 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:47:23.445060    4019 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:47:23.462156    4019 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0916 10:47:23.462190    4019 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:47:23.513446    4019 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:47:23.513527    4019 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:47:23.513586    4019 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 10:47:23.566716    4019 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:47:23.574876    4019 out.go:235]   - Generating certificates and keys ...
	I0916 10:47:23.574910    4019 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:47:23.574940    4019 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:47:23.574990    4019 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 10:47:23.575020    4019 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 10:47:23.575059    4019 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 10:47:23.575088    4019 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 10:47:23.575128    4019 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 10:47:23.575161    4019 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 10:47:23.575216    4019 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 10:47:23.575250    4019 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 10:47:23.575269    4019 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 10:47:23.575299    4019 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:47:23.626726    4019 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:47:23.704362    4019 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:47:23.891584    4019 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:47:23.966274    4019 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:47:23.994170    4019 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:47:23.994598    4019 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:47:23.994619    4019 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:47:24.081817    4019 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:47:22.512606    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:22.512639    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:24.088932    4019 out.go:235]   - Booting up control plane ...
	I0916 10:47:24.088985    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:47:24.089021    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:47:24.089088    4019 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:47:24.089126    4019 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:47:24.089206    4019 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 10:47:28.589093    4019 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504560 seconds
	I0916 10:47:28.589185    4019 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:47:28.596500    4019 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:47:29.107598    4019 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:47:29.107768    4019 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-707000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:47:29.611473    4019 kubeadm.go:310] [bootstrap-token] Using token: me8yh3.v7xmqm9syeoc3hay
	I0916 10:47:29.617847    4019 out.go:235]   - Configuring RBAC rules ...
	I0916 10:47:29.617907    4019 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:47:29.617961    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:47:29.622495    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:47:29.623410    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:47:29.624266    4019 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:47:29.625114    4019 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:47:29.628226    4019 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:47:29.804831    4019 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:47:30.015465    4019 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:47:30.016122    4019 kubeadm.go:310] 
	I0916 10:47:30.016159    4019 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:47:30.016162    4019 kubeadm.go:310] 
	I0916 10:47:30.016239    4019 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:47:30.016247    4019 kubeadm.go:310] 
	I0916 10:47:30.016262    4019 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:47:30.016290    4019 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:47:30.016314    4019 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:47:30.016317    4019 kubeadm.go:310] 
	I0916 10:47:30.016343    4019 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:47:30.016347    4019 kubeadm.go:310] 
	I0916 10:47:30.016384    4019 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:47:30.016388    4019 kubeadm.go:310] 
	I0916 10:47:30.016442    4019 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:47:30.016508    4019 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:47:30.016570    4019 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:47:30.016577    4019 kubeadm.go:310] 
	I0916 10:47:30.016652    4019 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:47:30.016694    4019 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:47:30.016712    4019 kubeadm.go:310] 
	I0916 10:47:30.016752    4019 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token me8yh3.v7xmqm9syeoc3hay \
	I0916 10:47:30.016801    4019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda \
	I0916 10:47:30.016834    4019 kubeadm.go:310] 	--control-plane 
	I0916 10:47:30.016840    4019 kubeadm.go:310] 
	I0916 10:47:30.016887    4019 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:47:30.016891    4019 kubeadm.go:310] 
	I0916 10:47:30.016930    4019 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token me8yh3.v7xmqm9syeoc3hay \
	I0916 10:47:30.017011    4019 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda 
	I0916 10:47:30.017069    4019 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:47:30.017077    4019 cni.go:84] Creating CNI manager for ""
	I0916 10:47:30.017085    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:47:30.020695    4019 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:47:30.027706    4019 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:47:30.032351    4019 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0916 10:47:30.037238    4019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:47:30.037291    4019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:47:30.037309    4019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-707000 minikube.k8s.io/updated_at=2024_09_16T10_47_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=running-upgrade-707000 minikube.k8s.io/primary=true
	I0916 10:47:30.075582    4019 ops.go:34] apiserver oom_adj: -16
	I0916 10:47:30.075592    4019 kubeadm.go:1113] duration metric: took 38.3485ms to wait for elevateKubeSystemPrivileges
	I0916 10:47:30.075601    4019 kubeadm.go:394] duration metric: took 4m12.438076083s to StartCluster
	I0916 10:47:30.075612    4019 settings.go:142] acquiring lock: {Name:mkcc144e0c413dd8611ee3ccbc8c08f02650f2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:30.075701    4019 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:47:30.076141    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/kubeconfig: {Name:mk3766c19461825f7de68cf1dc4ddceadf57e288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:30.076346    4019 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:47:30.076392    4019 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:47:30.076429    4019 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-707000"
	I0916 10:47:30.076436    4019 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-707000"
	W0916 10:47:30.076440    4019 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:47:30.076441    4019 config.go:182] Loaded profile config "running-upgrade-707000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:47:30.076449    4019 host.go:66] Checking if "running-upgrade-707000" exists ...
	I0916 10:47:30.076475    4019 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-707000"
	I0916 10:47:30.076484    4019 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-707000"
	I0916 10:47:30.076709    4019 retry.go:31] will retry after 855.309173ms: connect: dial unix /Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/monitor: connect: connection refused
	I0916 10:47:30.077432    4019 kapi.go:59] client config for running-upgrade-707000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/running-upgrade-707000/client.key", CAFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10285d800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:47:30.077549    4019 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-707000"
	W0916 10:47:30.077554    4019 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:47:30.077561    4019 host.go:66] Checking if "running-upgrade-707000" exists ...
	I0916 10:47:30.078081    4019 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:47:30.078087    4019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:47:30.078092    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/id_rsa Username:docker}
	I0916 10:47:30.079763    4019 out.go:177] * Verifying Kubernetes components...
	I0916 10:47:30.087756    4019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:47:30.183782    4019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:47:30.188902    4019 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:47:30.188953    4019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:47:30.193241    4019 api_server.go:72] duration metric: took 116.88875ms to wait for apiserver process to appear ...
	I0916 10:47:30.193248    4019 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:47:30.193254    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:30.248882    4019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:47:30.534839    4019 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:47:30.534851    4019 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:47:30.939712    4019 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:47:27.514403    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:27.514442    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:30.943701    4019 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:47:30.943707    4019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:47:30.943716    4019 sshutil.go:53] new ssh client: &{IP:localhost Port:50259 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/running-upgrade-707000/id_rsa Username:docker}
	I0916 10:47:30.983321    4019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:47:32.516530    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:32.516558    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:35.194385    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:35.194423    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:37.518605    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:37.518634    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:40.195026    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:40.195063    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:42.520745    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:42.520763    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:45.195580    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:45.195601    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:47.522797    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:47.523034    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:47:47.545393    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:47:47.545483    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:47:47.555907    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:47:47.555990    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:47:47.565978    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:47:47.566061    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:47:47.576820    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:47:47.576900    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:47:47.587811    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:47:47.587891    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:47:47.598483    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:47:47.598563    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:47:47.608416    4163 logs.go:276] 0 containers: []
	W0916 10:47:47.608432    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:47:47.608513    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:47:47.619128    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:47:47.619145    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:47:47.619151    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:47:47.631203    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:47:47.631213    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:47:47.647166    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:47:47.647177    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:47:47.672444    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:47:47.672451    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:47:47.712290    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:47:47.712299    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:47:47.791867    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:47:47.791878    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:47:47.806228    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:47:47.806240    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:47:47.820783    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:47:47.820794    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:47:47.863306    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:47:47.863325    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:47:47.889155    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:47:47.889168    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:47:47.904300    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:47:47.904318    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:47:47.916285    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:47:47.916297    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:47:47.920579    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:47:47.920586    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:47:47.931888    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:47:47.931901    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:47:47.946332    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:47:47.946346    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:47:47.958556    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:47:47.958568    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:47:47.975887    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:47:47.975898    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:47:50.492976    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:50.195828    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:50.195873    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:55.495080    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:55.495286    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:47:55.514924    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:47:55.515062    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:47:55.529123    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:47:55.529219    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:47:55.541968    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:47:55.542061    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:47:55.552823    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:47:55.552906    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:47:55.564420    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:47:55.564504    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:47:55.575316    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:47:55.575387    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:47:55.585519    4163 logs.go:276] 0 containers: []
	W0916 10:47:55.585533    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:47:55.585608    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:47:55.596020    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:47:55.596039    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:47:55.596044    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:47:55.635457    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:47:55.635467    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:47:55.654499    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:47:55.654509    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:47:55.666987    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:47:55.666997    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:47:55.679239    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:47:55.679248    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:47:55.690890    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:47:55.690901    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:47:55.727510    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:47:55.727519    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:47:55.731734    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:47:55.731747    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:47:55.745223    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:47:55.745231    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:47:55.759484    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:47:55.759495    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:47:55.771431    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:47:55.771441    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:47:55.785899    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:47:55.785909    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:47:55.819910    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:47:55.819919    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:47:55.833659    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:47:55.833669    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:47:55.845401    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:47:55.845411    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:47:55.857804    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:47:55.857816    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:47:55.875697    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:47:55.875708    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:47:55.196275    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:55.196315    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:00.196558    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:00.196585    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0916 10:48:00.535723    4019 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0916 10:48:00.539895    4019 out.go:177] * Enabled addons: storage-provisioner
	I0916 10:47:58.401909    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:00.550885    4019 addons.go:510] duration metric: took 30.475404333s for enable addons: enabled=[storage-provisioner]
	I0916 10:48:03.403619    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:03.404140    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:03.436538    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:03.436691    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:03.459421    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:03.459524    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:03.472172    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:03.472264    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:03.492553    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:03.492658    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:03.510013    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:03.510102    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:03.520135    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:03.520217    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:03.530563    4163 logs.go:276] 0 containers: []
	W0916 10:48:03.530573    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:03.530641    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:03.543234    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:03.543253    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:03.543259    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:03.555989    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:03.556000    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:03.573327    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:03.573337    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:03.585216    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:03.585227    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:03.589938    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:03.589947    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:03.630526    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:03.630539    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:03.645213    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:03.645224    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:03.656811    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:03.656823    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:03.671469    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:03.671481    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:03.687933    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:03.687943    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:03.699734    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:03.699745    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:03.712066    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:03.712076    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:03.749279    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:03.749289    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:03.784628    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:03.784643    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:03.800271    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:03.800282    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:03.812002    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:03.812015    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:03.826460    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:03.826469    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:06.352134    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:05.197204    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:05.197267    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:11.354222    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:11.354376    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:11.368426    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:11.368513    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:11.379636    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:11.379710    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:11.390030    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:11.390105    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:11.400840    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:11.400919    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:11.411618    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:11.411709    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:11.422013    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:11.422098    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:11.432163    4163 logs.go:276] 0 containers: []
	W0916 10:48:11.432173    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:11.432241    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:11.445987    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:11.446004    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:11.446010    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:11.482827    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:11.482835    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:11.504317    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:11.504328    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:11.524484    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:11.524494    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:11.546670    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:11.546680    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:11.562310    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:11.562319    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:11.573635    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:11.573649    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:11.585689    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:11.585703    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:11.590204    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:11.590210    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:11.632480    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:11.632494    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:11.649163    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:11.649174    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:11.688099    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:11.688115    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:11.704053    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:11.704062    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:11.715899    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:11.715910    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:11.729547    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:11.729561    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:11.746797    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:11.746814    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:11.759351    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:11.759365    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:10.198215    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:10.198246    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:14.284442    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:15.199384    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:15.199406    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:19.284790    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:19.284960    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:19.296646    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:19.296747    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:19.308745    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:19.308829    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:19.319345    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:19.319430    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:19.329960    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:19.330051    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:19.340417    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:19.340495    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:19.351156    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:19.351240    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:19.361568    4163 logs.go:276] 0 containers: []
	W0916 10:48:19.361592    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:19.361658    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:19.372401    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:19.372419    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:19.372425    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:19.383695    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:19.383706    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:19.398520    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:19.398530    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:19.410441    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:19.410453    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:19.414731    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:19.414738    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:19.429161    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:19.429173    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:19.440863    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:19.440877    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:19.464067    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:19.464073    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:19.501421    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:19.501432    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:19.514926    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:19.514940    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:19.529354    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:19.529364    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:19.547988    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:19.547998    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:19.583474    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:19.583490    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:19.621990    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:19.622005    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:19.633325    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:19.633336    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:19.651332    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:19.651347    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:19.666783    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:19.666792    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:22.179875    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:20.200628    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:20.200651    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:27.182102    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:27.182392    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:27.207042    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:27.207184    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:27.229271    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:27.229371    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:27.241940    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:27.242026    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:27.252979    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:27.253067    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:27.264478    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:27.264560    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:27.275380    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:27.275470    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:27.285861    4163 logs.go:276] 0 containers: []
	W0916 10:48:27.285872    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:27.285947    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:27.296372    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:27.296392    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:27.296398    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:27.335695    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:27.335704    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:27.346907    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:27.346918    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:27.358258    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:27.358274    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:27.372430    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:27.372440    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:27.383707    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:27.383719    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:27.388343    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:27.388352    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:27.422497    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:27.422508    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:27.436763    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:27.436775    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:27.453551    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:27.453561    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:25.202033    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:25.202109    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:27.465003    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:27.465013    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:27.479165    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:27.479174    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:27.516517    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:27.516530    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:27.530780    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:27.530791    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:27.556547    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:27.556557    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:27.568169    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:27.568181    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:27.587148    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:27.587163    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
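With the IDs in hand, the process tails each source with a fixed 400-line budget: `docker logs --tail 400` per container, journalctl for the kubelet and Docker units, a filtered dmesg, the versioned kubectl for "describe nodes", and a container-status command that prefers crictl when installed and falls back to `docker ps -a`. The sketch below strings those commands together; it is a simplified stand-in for the gathering loop, not minikube's implementation, and reuses the two kube-apiserver IDs reported in the cycle above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one gathering command through bash, the same way the
	// ssh_runner lines above do, and prints whatever it captured.
	func run(cmdline string) {
		out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
		if err != nil {
			fmt.Println("gather failed:", err)
		}
		fmt.Print(string(out))
	}

	func main() {
		// Per-container logs, capped at the last 400 lines each. The IDs
		// are the two kube-apiserver containers enumerated above.
		for _, id := range []string{"74d76eebdf5b", "bc2f80890fd2"} {
			run("docker logs --tail 400 " + id)
		}
		// Host-side sources, gathered with the same 400-line budget.
		run("sudo journalctl -u kubelet -n 400")
		run("sudo journalctl -u docker -u cri-docker -n 400")
		run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		// Container status: try crictl if present; if the backticked
		// lookup yields nothing runnable, fall back to plain docker.
		run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}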
	I0916 10:48:30.098779    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:30.204367    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:30.204482    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:30.222153    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:48:30.222246    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:30.238815    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:48:30.238898    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:30.250032    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:48:30.250118    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:30.263547    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:48:30.263633    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:30.274353    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:48:30.274440    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:30.285721    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:48:30.285803    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:30.301592    4019 logs.go:276] 0 containers: []
	W0916 10:48:30.301603    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:30.301669    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:30.315076    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:48:30.315090    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:30.315098    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:30.350969    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:48:30.350977    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:48:30.364913    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:48:30.364923    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:48:30.378848    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:48:30.378859    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:48:30.390889    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:48:30.390900    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:48:30.408604    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:48:30.408614    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:48:30.424503    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:30.424513    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:30.448225    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:48:30.448233    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:30.460547    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:30.460558    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:30.465379    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:30.465387    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:30.503393    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:48:30.503405    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:48:30.515338    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:48:30.515349    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:48:30.534217    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:48:30.534231    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:48:33.048301    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:35.099858    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:35.100463    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:35.140199    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:35.140371    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:35.162441    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:35.162562    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:35.177342    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:35.177441    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:35.190498    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:35.190593    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:35.201251    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:35.201334    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:35.217247    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:35.217327    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:35.227030    4163 logs.go:276] 0 containers: []
	W0916 10:48:35.227048    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:35.227114    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:35.237661    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:35.237679    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:35.237685    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:35.241886    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:35.241892    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:35.253699    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:35.253713    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:35.268463    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:35.268473    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:35.280955    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:35.280971    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:35.320002    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:35.320018    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:35.334973    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:35.334984    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:35.352734    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:35.352744    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:35.375998    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:35.376007    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:35.387288    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:35.387300    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:35.421949    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:35.421962    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:35.436214    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:35.436228    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:35.453698    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:35.453711    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:35.465472    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:35.465483    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:35.489455    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:35.489465    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:35.527700    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:35.527713    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:35.541663    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:35.541680    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:38.050937    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:38.051470    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:38.081171    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:48:38.081330    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:38.098828    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:48:38.098934    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:38.112555    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:48:38.112641    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:38.124094    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:48:38.124199    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:38.134709    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:48:38.134790    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:38.144850    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:48:38.144937    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:38.154961    4019 logs.go:276] 0 containers: []
	W0916 10:48:38.154972    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:38.155045    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:38.165159    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:48:38.165172    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:48:38.165178    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:48:38.182789    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:48:38.182805    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:38.194743    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:38.194753    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:38.199003    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:48:38.199009    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:48:38.212950    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:48:38.212961    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:48:38.224121    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:48:38.224132    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:48:38.239711    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:48:38.239722    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:48:38.251468    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:38.251478    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:38.275131    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:38.275142    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:38.312421    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:38.312440    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:38.351072    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:48:38.351084    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:48:38.366134    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:48:38.366148    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:48:38.378614    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:48:38.378628    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:48:38.055753    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:40.892225    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:43.058231    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:43.058826    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:43.095324    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:43.095499    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:43.117874    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:43.118004    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:43.133215    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:43.133315    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:43.145813    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:43.145910    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:43.156293    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:43.156373    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:43.168990    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:43.169075    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:43.179451    4163 logs.go:276] 0 containers: []
	W0916 10:48:43.179462    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:43.179538    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:43.190038    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:43.190056    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:43.190061    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:43.202763    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:43.202773    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:43.242226    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:43.242236    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:43.246649    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:43.246657    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:43.261465    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:43.261475    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:43.273030    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:43.273043    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:43.289409    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:43.289423    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:43.300458    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:43.300469    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:43.312739    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:43.312750    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:43.349491    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:43.349507    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:43.387689    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:43.387699    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:43.409025    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:43.409038    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:43.420713    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:43.420726    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:43.445537    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:43.445545    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:43.459457    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:43.459467    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:43.474600    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:43.474611    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:43.493087    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:43.493101    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:46.012170    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:45.894419    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:45.894724    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:45.920281    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:48:45.920427    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:45.937279    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:48:45.937370    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:45.950650    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:48:45.950743    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:45.961520    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:48:45.961600    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:45.971850    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:48:45.971931    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:45.982046    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:48:45.982128    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:45.991796    4019 logs.go:276] 0 containers: []
	W0916 10:48:45.991807    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:45.991883    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:46.002589    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:48:46.002602    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:46.002608    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:46.037776    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:46.037783    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:46.075420    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:48:46.075434    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:48:46.089270    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:48:46.089283    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:48:46.101545    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:48:46.101560    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:48:46.121700    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:48:46.121712    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:48:46.139169    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:46.139182    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:46.163186    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:46.163196    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:46.170248    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:48:46.170254    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:48:46.184697    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:48:46.184710    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:48:46.196740    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:48:46.196751    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:48:46.208039    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:48:46.208049    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:48:46.222706    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:48:46.222717    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:51.014167    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:51.014322    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:51.030212    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:51.030303    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:51.042472    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:51.042592    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:51.053233    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:51.053337    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:51.064083    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:51.064169    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:51.074825    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:51.074902    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:51.087484    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:51.087573    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:51.098024    4163 logs.go:276] 0 containers: []
	W0916 10:48:51.098038    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:51.098115    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:51.108459    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:51.108476    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:51.108484    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:51.147286    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:51.147296    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:51.166841    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:51.166854    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:51.181056    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:51.181067    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:51.196661    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:51.196675    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:51.200860    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:51.200870    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:51.215026    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:51.215040    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:51.226925    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:51.226935    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:51.263178    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:51.263188    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:51.287069    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:51.287080    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:51.324883    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:51.324897    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:51.335988    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:51.336002    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:51.347722    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:51.347733    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:51.363249    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:51.363260    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:51.386381    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:51.386395    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:51.398198    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:51.398210    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:51.409352    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:51.409364    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:48.734776    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:53.921153    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:53.737485    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:53.737830    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:53.763783    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:48:53.763892    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:53.779713    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:48:53.779851    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:53.793342    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:48:53.793436    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:53.808103    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:48:53.808186    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:53.819065    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:48:53.819153    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:53.830759    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:48:53.830840    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:53.845951    4019 logs.go:276] 0 containers: []
	W0916 10:48:53.845963    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:53.846034    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:53.858811    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:48:53.858827    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:48:53.858835    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:48:53.870859    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:53.870872    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:53.908638    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:48:53.908654    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:48:53.924340    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:48:53.924351    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:48:53.938173    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:48:53.938184    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:48:53.951205    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:48:53.951215    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:48:53.962481    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:48:53.962490    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:48:53.981083    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:53.981096    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:54.005873    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:48:54.005883    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:54.017149    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:54.017159    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:54.053668    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:54.053677    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:54.057898    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:48:54.057907    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:48:54.069092    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:48:54.069103    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:48:56.585764    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:58.923233    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:58.923533    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:58.946274    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:58.946414    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:58.962065    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:58.962171    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:58.975224    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:58.975320    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:58.987182    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:58.987268    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:58.997815    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:58.997892    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:59.008166    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:59.008255    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:59.017934    4163 logs.go:276] 0 containers: []
	W0916 10:48:59.017945    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:59.018017    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:59.028106    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:59.028123    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:59.028129    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:59.041876    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:59.041886    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:59.053827    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:59.053838    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:59.073397    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:59.073407    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:59.097815    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:59.097822    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:59.109737    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:59.109747    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:59.147779    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:59.147791    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:59.161583    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:59.161594    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:59.172842    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:59.172853    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:59.184913    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:59.184928    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:59.196295    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:59.196306    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:59.210316    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:59.210326    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:59.221289    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:59.221299    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:59.258924    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:59.258933    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:59.263320    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:59.263330    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:59.300095    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:59.300111    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:59.321261    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:59.321272    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:01.839065    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:01.587447    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:01.587644    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:01.602554    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:01.602652    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:01.614393    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:01.614477    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:01.625137    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:01.625214    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:01.635658    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:01.635740    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:01.646530    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:01.646622    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:01.657048    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:01.657127    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:01.667131    4019 logs.go:276] 0 containers: []
	W0916 10:49:01.667142    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:01.667212    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:01.677592    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:01.677608    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:01.677614    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:01.682111    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:01.682118    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:01.696783    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:01.696792    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:01.711141    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:01.711151    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:01.735985    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:01.735992    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:01.753796    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:01.753811    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:01.791051    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:01.791061    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:01.832180    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:01.832196    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:01.847277    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:01.847286    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:01.862161    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:01.862170    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:01.873998    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:01.874011    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:01.885427    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:01.885437    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:01.897695    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:01.897710    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:06.841103    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:06.841243    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:06.852664    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:06.852746    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:06.862999    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:06.863073    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:06.873259    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:06.873343    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:06.883787    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:06.883864    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:06.894154    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:06.894237    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:06.906168    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:06.906240    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:06.915864    4163 logs.go:276] 0 containers: []
	W0916 10:49:06.915878    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:06.915951    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:06.926302    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:06.926318    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:06.926333    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:06.930751    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:06.930758    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:06.944485    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:06.944495    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:06.956386    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:06.956396    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:06.967657    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:06.967668    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:06.982503    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:06.982517    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:06.994438    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:06.994449    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:07.006757    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:07.006767    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:07.050219    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:07.050231    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:07.068478    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:07.068487    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:07.079784    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:07.079794    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:07.098191    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:07.098202    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:07.135903    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:07.135916    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:07.147844    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:07.147855    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:07.185935    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:07.185943    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:07.203403    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:07.203415    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:07.218904    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:07.218917    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:04.411752    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:09.745919    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:09.414298    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:09.414605    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:09.441191    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:09.441343    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:09.458316    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:09.458410    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:09.471538    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:09.471626    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:09.483980    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:09.484076    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:09.495908    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:09.496004    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:09.507155    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:09.507253    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:09.519311    4019 logs.go:276] 0 containers: []
	W0916 10:49:09.519325    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:09.519407    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:09.531341    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:09.531357    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:09.531363    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:09.543774    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:09.543788    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:09.579983    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:09.579995    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:09.594639    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:09.594651    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:09.609011    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:09.609022    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:09.620456    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:09.620467    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:09.639029    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:09.639040    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:09.662628    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:09.662636    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:09.674420    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:09.674431    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:09.710672    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:09.710680    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:09.714894    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:09.714902    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:09.731548    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:09.731564    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:09.743936    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:09.743947    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:12.264143    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:14.747040    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:14.747172    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:14.758252    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:14.758349    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:14.768706    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:14.768791    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:14.779156    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:14.779236    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:14.789610    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:14.789688    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:14.804663    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:14.804752    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:14.815375    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:14.815454    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:14.829628    4163 logs.go:276] 0 containers: []
	W0916 10:49:14.829642    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:14.829717    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:14.840611    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:14.840628    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:14.840635    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:14.875516    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:14.875526    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:14.888564    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:14.888575    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:14.900351    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:14.900360    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:14.915666    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:14.915677    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:14.933195    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:14.933204    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:14.945161    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:14.945172    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:14.983826    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:14.983833    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:14.988771    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:14.988778    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:15.000639    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:15.000653    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:15.015605    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:15.015619    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:15.026963    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:15.026976    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:15.063928    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:15.063938    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:15.079006    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:15.079020    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:15.096394    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:15.096406    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:15.120033    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:15.120046    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:15.134176    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:15.134190    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:17.264652    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:17.264882    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:17.280199    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:17.280301    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:17.292933    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:17.293026    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:17.304127    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:17.304212    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:17.313972    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:17.314056    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:17.324045    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:17.324126    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:17.334826    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:17.334910    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:17.345833    4019 logs.go:276] 0 containers: []
	W0916 10:49:17.345849    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:17.345922    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:17.356345    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:17.356361    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:17.356367    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:17.370049    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:17.370059    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:17.383838    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:17.383852    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:17.398524    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:17.398534    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:17.410674    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:17.410688    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:17.422084    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:17.422095    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:17.446890    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:17.446898    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:17.451712    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:17.451721    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:17.486048    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:17.486059    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:17.497795    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:17.497806    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:17.508980    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:17.508992    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:17.536937    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:17.536949    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:17.549475    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:17.549488    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:17.650922    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:20.088971    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:22.651261    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:22.651410    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:22.665608    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:22.665710    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:22.678090    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:22.678175    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:22.689017    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:22.689105    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:22.699851    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:22.699941    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:22.710030    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:22.710103    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:22.724914    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:22.724991    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:22.735108    4163 logs.go:276] 0 containers: []
	W0916 10:49:22.735118    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:22.735180    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:22.750291    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:22.750314    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:22.750319    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:22.761565    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:22.761577    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:22.765741    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:22.765748    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:22.777430    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:22.777441    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:22.795519    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:22.795528    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:22.809849    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:22.809858    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:22.823524    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:22.823533    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:22.838905    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:22.838915    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:22.854495    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:22.854507    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:22.877346    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:22.877355    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:22.888984    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:22.888995    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:22.923053    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:22.923064    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:22.961574    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:22.961585    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:22.977099    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:22.977109    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:22.991356    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:22.991369    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:23.030411    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:23.030424    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:23.043827    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:23.043837    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:25.557050    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:25.090758    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:25.090903    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:25.105225    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:25.105320    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:25.118066    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:25.118147    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:25.128994    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:25.129076    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:25.139502    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:25.139587    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:25.150445    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:25.150531    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:25.160902    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:25.160989    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:25.172708    4019 logs.go:276] 0 containers: []
	W0916 10:49:25.172720    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:25.172796    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:25.187869    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:25.187885    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:25.187891    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:25.205084    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:25.205096    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:25.222307    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:25.222317    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:25.234233    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:25.234244    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:25.239273    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:25.239283    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:25.253235    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:25.253247    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:25.268006    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:25.268017    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:25.280047    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:25.280059    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:25.291768    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:25.291779    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:25.303336    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:25.303349    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:25.326218    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:25.326224    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:25.337568    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:25.337583    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:25.373189    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:25.373198    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:27.915365    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:30.559030    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:30.559326    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:30.589461    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:30.589615    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:30.607352    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:30.607454    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:30.627401    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:30.627502    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:30.638711    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:30.638792    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:30.651240    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:30.651326    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:30.666218    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:30.666305    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:30.676849    4163 logs.go:276] 0 containers: []
	W0916 10:49:30.676859    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:30.676922    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:30.688363    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:30.688380    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:30.688385    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:30.725910    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:30.725919    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:30.737509    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:30.737520    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:30.751878    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:30.751888    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:30.769557    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:30.769568    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:30.780480    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:30.780493    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:30.793511    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:30.793522    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:30.808135    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:30.808145    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:30.832322    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:30.832335    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:30.836484    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:30.836490    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:30.874240    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:30.874252    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:30.885409    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:30.885421    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:30.922837    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:30.922848    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:30.937183    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:30.937194    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:30.948953    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:30.948964    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:30.972099    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:30.972110    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:30.991580    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:30.991590    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:32.917497    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:32.917714    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:32.932452    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:32.932547    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:32.945141    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:32.945229    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:32.956089    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:32.956172    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:32.966381    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:32.966460    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:32.977971    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:32.978053    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:32.995925    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:32.996010    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:33.005938    4019 logs.go:276] 0 containers: []
	W0916 10:49:33.005949    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:33.006010    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:33.016215    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:33.016229    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:33.016236    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:33.027218    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:33.027228    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:33.038865    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:33.038878    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:33.056383    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:33.056395    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:33.091860    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:33.091868    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:33.109249    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:33.109259    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:33.123499    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:33.123509    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:33.137632    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:33.137645    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:33.149176    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:33.149187    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:33.174402    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:33.174411    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:33.185993    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:33.186004    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:33.190791    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:33.190797    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:33.226026    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:33.226037    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:33.507952    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:35.739786    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:38.509348    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:38.509663    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:38.531243    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:38.531363    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:38.551301    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:38.551402    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:38.562773    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:38.562864    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:38.573438    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:38.573522    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:38.583823    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:38.583908    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:38.594466    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:38.594549    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:38.609214    4163 logs.go:276] 0 containers: []
	W0916 10:49:38.609229    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:38.609304    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:38.622995    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:38.623013    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:38.623019    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:38.634100    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:38.634111    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:38.677730    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:38.677743    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:38.724828    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:38.724843    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:38.738295    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:38.738309    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:38.753352    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:38.753362    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:38.767792    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:38.767801    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:38.806967    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:38.806976    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:38.811124    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:38.811131    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:38.822593    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:38.822602    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:38.835482    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:38.835493    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:38.847651    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:38.847662    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:38.858995    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:38.859007    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:38.883437    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:38.883444    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:38.897160    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:38.897169    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:38.913123    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:38.913134    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:38.931968    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:38.931982    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:41.451506    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:40.741886    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:40.742025    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:40.753578    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:40.753669    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:40.763995    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:40.764088    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:40.774520    4019 logs.go:276] 2 containers: [af22ba76198b c1a6f8529ee6]
	I0916 10:49:40.774595    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:40.785284    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:40.785370    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:40.799018    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:40.799106    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:40.809714    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:40.809787    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:40.819984    4019 logs.go:276] 0 containers: []
	W0916 10:49:40.819996    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:40.820060    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:40.830726    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:40.830740    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:40.830745    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:40.848062    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:40.848074    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:40.884285    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:40.884299    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:40.889098    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:40.889106    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:40.903093    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:40.903103    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:40.919618    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:40.919630    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:40.931000    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:40.931010    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:40.942447    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:40.942456    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:40.954212    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:40.954225    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:40.978448    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:40.978458    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:41.013045    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:41.013055    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:41.032098    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:41.032108    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:41.043564    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:41.043577    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:43.557292    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:46.453741    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:46.453970    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:46.479508    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:46.479635    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:46.495457    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:46.495555    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:46.507433    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:46.507529    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:46.518607    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:46.518690    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:46.546205    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:46.546293    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:46.561704    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:46.561786    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:46.571655    4163 logs.go:276] 0 containers: []
	W0916 10:49:46.571669    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:46.571735    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:46.582299    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:46.582317    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:46.582323    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:46.619088    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:46.619098    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:46.632882    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:46.632893    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:46.647357    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:46.647367    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:46.662455    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:46.662465    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:46.675558    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:46.675570    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:46.679867    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:46.679873    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:46.719015    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:46.719054    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:46.737708    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:46.737719    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:46.772766    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:46.772783    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:46.791311    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:46.791325    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:46.808411    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:46.808425    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:46.822496    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:46.822511    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:46.833237    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:46.833249    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:46.856045    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:46.856053    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:46.867677    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:46.867690    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:46.879481    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:46.879491    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:48.559411    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:48.559692    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:48.581357    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:48.581483    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:48.596936    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:48.597032    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:48.609453    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:49:48.609542    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:48.620239    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:48.620318    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:48.630439    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:48.630520    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:48.641284    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:48.641369    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:48.651669    4019 logs.go:276] 0 containers: []
	W0916 10:49:48.651680    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:48.651745    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:48.662189    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:48.662206    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:48.662212    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:48.675855    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:49:48.675865    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:49:48.687611    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:48.687622    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:49.393326    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:48.704406    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:48.704420    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:48.731825    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:48.731838    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:48.737968    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:48.737980    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:48.775526    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:48.775537    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:48.790268    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:48.790277    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:48.807740    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:48.807754    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:48.821645    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:48.821657    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:48.833265    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:48.833281    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:48.844810    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:49:48.844823    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:49:48.856456    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:48.856466    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:48.867359    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:48.867373    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:48.902598    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:48.902609    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:51.416479    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:54.395693    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:54.396195    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:54.426335    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:54.426491    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:54.444733    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:54.444847    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:54.458588    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:54.458674    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:54.471682    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:54.471754    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:54.482769    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:54.482852    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:54.493113    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:54.493194    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:54.507187    4163 logs.go:276] 0 containers: []
	W0916 10:49:54.507199    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:54.507265    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:54.517192    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:54.517212    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:54.517217    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:54.555909    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:54.555922    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:54.561091    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:54.561098    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:54.584981    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:54.584991    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:54.599411    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:54.599422    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:54.623867    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:54.623876    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:54.639077    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:54.639091    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:54.681485    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:54.681497    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:54.693361    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:54.693374    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:54.705365    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:54.705378    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:54.740072    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:54.740092    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:54.763414    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:54.763428    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:54.787471    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:54.787483    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:54.799404    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:54.799417    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:54.811672    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:54.811684    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:54.826229    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:54.826239    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:54.838112    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:54.838122    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:57.357295    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:56.418740    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:56.419363    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:56.457852    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:49:56.458021    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:56.479648    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:49:56.479769    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:56.494411    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:49:56.494497    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:56.506450    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:49:56.506529    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:56.516933    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:49:56.517013    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:56.527666    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:49:56.527748    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:56.537957    4019 logs.go:276] 0 containers: []
	W0916 10:49:56.537968    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:56.538038    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:56.554779    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:49:56.554801    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:49:56.554808    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:49:56.566596    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:49:56.566607    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:49:56.578500    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:49:56.578511    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:49:56.595488    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:56.595500    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:56.619530    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:49:56.619538    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:49:56.631235    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:49:56.631245    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:49:56.645544    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:56.645554    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:56.649774    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:56.649782    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:56.685602    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:49:56.685617    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:49:56.700263    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:49:56.700273    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:49:56.714191    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:56.714207    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:56.749862    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:49:56.749871    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:49:56.774455    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:49:56.774467    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:49:56.796315    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:49:56.796328    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:49:56.812410    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:49:56.812422    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:02.359526    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:02.359848    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:02.388091    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:02.388221    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:02.405551    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:02.405647    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:02.418879    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:02.418966    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:02.429613    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:02.429717    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:02.443886    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:02.443971    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:02.454709    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:02.454794    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:59.327488    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:02.464967    4163 logs.go:276] 0 containers: []
	W0916 10:50:02.464980    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:02.465051    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:02.476276    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:02.476295    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:02.476301    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:02.500670    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:02.500679    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:02.512394    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:02.512408    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:02.554688    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:02.554702    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:02.568666    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:02.568680    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:02.583950    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:02.583963    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:02.595580    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:02.595590    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:02.611096    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:02.611105    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:02.626052    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:02.626066    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:02.663382    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:02.663395    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:02.702502    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:02.702514    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:02.714522    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:02.714535    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:02.728602    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:02.728616    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:02.746131    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:02.746142    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:02.758402    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:02.758412    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:02.762282    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:02.762288    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:02.782733    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:02.782745    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:05.305688    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:04.329982    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:04.330168    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:04.346596    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:04.346694    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:04.359364    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:04.359442    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:04.372021    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:04.372106    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:04.382686    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:04.382767    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:04.392928    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:04.393005    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:04.403805    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:04.403882    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:04.414120    4019 logs.go:276] 0 containers: []
	W0916 10:50:04.414130    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:04.414203    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:04.423946    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:04.423963    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:04.423968    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:04.438166    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:04.438180    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:04.449891    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:04.449906    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:04.461551    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:04.461564    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:04.479689    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:04.479703    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:04.491318    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:04.491332    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:04.526873    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:04.526881    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:04.562752    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:04.562768    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:04.574880    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:04.574890    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:04.599524    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:04.599531    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:04.604331    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:04.604340    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:04.615598    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:04.615610    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:04.640423    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:04.640436    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:04.654888    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:04.654899    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:04.670287    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:04.670298    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:07.184584    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:10.307697    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:10.307876    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:10.321711    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:10.321813    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:10.333379    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:10.333463    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:10.344032    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:10.344118    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:10.354526    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:10.354616    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:10.365080    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:10.365163    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:10.376535    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:10.376618    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:10.387107    4163 logs.go:276] 0 containers: []
	W0916 10:50:10.387120    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:10.387194    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:10.397746    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:10.397765    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:10.397771    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:10.402079    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:10.402085    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:10.415923    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:10.415933    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:10.427929    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:10.427940    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:10.443501    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:10.443515    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:10.483221    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:10.483231    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:10.494718    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:10.494729    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:10.513223    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:10.513237    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:10.534542    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:10.534555    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:10.559704    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:10.559714    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:10.573484    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:10.573499    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:10.585133    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:10.585148    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:10.601465    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:10.601478    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:10.613114    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:10.613127    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:10.624211    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:10.624223    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:10.636303    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:10.636319    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:10.670955    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:10.670970    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:12.186320    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:12.186567    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:12.212657    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:12.212813    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:12.232050    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:12.232145    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:12.247444    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:12.247533    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:12.263016    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:12.263093    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:12.273624    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:12.273705    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:12.284186    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:12.284262    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:12.294161    4019 logs.go:276] 0 containers: []
	W0916 10:50:12.294176    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:12.294249    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:12.304639    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:12.304655    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:12.304661    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:12.343846    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:12.343856    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:12.358426    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:12.358436    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:12.394342    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:12.394355    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:12.408803    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:12.408815    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:12.420170    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:12.420184    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:12.431783    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:12.431796    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:12.443732    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:12.443746    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:12.448674    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:12.448682    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:12.460178    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:12.460191    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:12.477739    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:12.477752    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:12.489425    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:12.489441    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:12.505570    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:12.505581    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:12.519622    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:12.519633    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:12.531212    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:12.531223    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:13.220403    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:15.059349    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:18.222393    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:18.222688    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:18.245733    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:18.245870    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:18.261907    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:18.262018    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:18.275218    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:18.275313    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:18.286257    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:18.286339    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:18.296968    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:18.297053    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:18.307312    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:18.307396    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:18.317713    4163 logs.go:276] 0 containers: []
	W0916 10:50:18.317725    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:18.317801    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:18.327849    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:18.327866    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:18.327872    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:18.339589    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:18.339600    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:18.379608    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:18.379619    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:18.415196    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:18.415213    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:18.457014    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:18.457025    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:18.471230    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:18.471245    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:18.486311    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:18.486325    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:18.497813    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:18.497826    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:18.511366    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:18.511379    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:18.526153    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:18.526167    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:18.548857    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:18.548868    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:18.560416    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:18.560429    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:18.571922    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:18.571935    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:18.595409    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:18.595415    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:18.600070    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:18.600076    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:18.614560    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:18.614575    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:18.629149    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:18.629164    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:21.143473    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:20.061769    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:20.061954    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:20.081968    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:20.082079    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:20.098734    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:20.098823    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:20.110914    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:20.110990    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:20.122289    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:20.122372    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:20.133038    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:20.133113    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:20.143237    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:20.143316    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:20.153607    4019 logs.go:276] 0 containers: []
	W0916 10:50:20.153618    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:20.153682    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:20.170077    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:20.170097    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:20.170104    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:20.182976    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:20.182990    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:20.198620    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:20.198631    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:20.216249    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:20.216260    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:20.228796    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:20.228810    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:20.240582    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:20.240595    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:20.262925    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:20.262936    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:20.274260    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:20.274271    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:20.285673    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:20.285683    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:20.299494    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:20.299504    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:20.336521    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:20.336528    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:20.340911    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:20.340918    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:20.375749    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:20.375762    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:20.390194    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:20.390204    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:20.401642    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:20.401652    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:22.927604    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:26.145710    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:26.146018    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:26.169837    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:26.169987    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:26.186545    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:26.186644    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:26.205400    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:26.205490    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:26.217976    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:26.218063    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:26.228293    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:26.228402    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:26.239180    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:26.239266    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:26.249587    4163 logs.go:276] 0 containers: []
	W0916 10:50:26.249598    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:26.249674    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:26.260743    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:26.260761    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:26.260766    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:26.272638    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:26.272652    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:26.295613    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:26.295620    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:26.307707    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:26.307722    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:26.319932    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:26.319944    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:26.331548    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:26.331561    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:26.335767    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:26.335774    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:26.374848    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:26.374857    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:26.386568    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:26.386582    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:26.408525    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:26.408538    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:26.423585    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:26.423599    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:26.462676    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:26.462685    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:26.480957    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:26.480970    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:26.495883    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:26.495898    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:26.507600    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:26.507609    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:26.543463    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:26.543476    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:26.558222    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:26.558237    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:27.930107    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:27.930403    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:27.956285    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:27.956438    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:27.991505    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:27.991587    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:28.007932    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:28.008020    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:28.023878    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:28.023966    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:28.035280    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:28.035361    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:28.048854    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:28.048938    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:28.060723    4019 logs.go:276] 0 containers: []
	W0916 10:50:28.060736    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:28.060807    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:28.070898    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:28.070914    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:28.070920    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:28.106344    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:28.106355    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:28.121057    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:28.121068    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:28.138274    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:28.138291    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:28.149689    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:28.149699    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:28.153869    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:28.153878    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:28.167203    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:28.167219    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:28.180487    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:28.180500    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:28.192577    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:28.192588    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:28.207031    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:28.207045    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:28.218978    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:28.218990    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:28.242837    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:28.242847    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:28.278392    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:28.278401    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:28.299919    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:28.299931    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:28.311569    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:28.311583    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:29.075456    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:30.825242    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:34.077557    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:34.077723    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:34.094797    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:34.094892    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:34.111549    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:34.111635    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:34.127803    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:34.127891    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:34.138844    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:34.138940    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:34.153426    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:34.153507    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:34.164179    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:34.164263    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:34.175040    4163 logs.go:276] 0 containers: []
	W0916 10:50:34.175051    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:34.175128    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:34.186288    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:34.186307    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:34.186313    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:34.197679    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:34.197691    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:34.209645    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:34.209654    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:34.226893    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:34.226902    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:34.240952    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:34.240968    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:34.252706    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:34.252718    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:34.264815    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:34.264827    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:34.279095    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:34.279106    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:34.317314    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:34.317324    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:34.332176    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:34.332186    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:34.349223    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:34.349237    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:34.365922    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:34.365932    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:34.404811    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:34.404820    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:34.408781    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:34.408790    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:34.442836    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:34.442848    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:34.457703    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:34.457714    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:34.479190    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:34.479198    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:36.993160    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:35.827652    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:35.827905    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:35.853080    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:35.853202    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:35.868834    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:35.868920    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:35.882267    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:35.882341    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:35.893489    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:35.893570    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:35.904475    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:35.904557    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:35.915349    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:35.915429    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:35.925487    4019 logs.go:276] 0 containers: []
	W0916 10:50:35.925499    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:35.925571    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:35.935731    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:35.935749    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:35.935755    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:35.961201    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:35.961212    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:35.972782    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:35.972794    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:35.984161    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:35.984172    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:35.999441    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:35.999456    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:36.011920    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:36.011932    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:36.025431    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:36.025442    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:36.037428    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:36.037440    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:36.051065    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:36.051077    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:36.062756    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:36.062768    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:36.074281    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:36.074293    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:36.097611    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:36.097622    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:36.133204    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:36.133214    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:36.137552    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:36.137559    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:36.177017    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:36.177028    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
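Note on the pattern above: this is the report's recurring diagnostic cycle. Process 4019 polls the apiserver's /healthz endpoint, and on every timeout it re-enumerates the control-plane containers and tails their logs before trying again. A minimal Go sketch of the polling half, assuming a self-signed test CA (pollHealthz and all parameter values are illustrative, not minikube's actual code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz checks the apiserver /healthz endpoint until it answers
    // 200 OK or the overall deadline expires, mirroring the repeated
    // "Checking apiserver healthz" / "stopped" pairs in this log.
    func pollHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request timeout, as in the log's ~5s gaps
            Transport: &http.Transport{
                // the test cluster uses a self-signed CA; skip verification here
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver %s never became healthy", url)
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

The 5-second client timeout matches the roughly five-second gap between each "Checking" line and its "stopped: ... Client.Timeout exceeded" counterpart above.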
	I0916 10:50:38.692711    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:41.994860    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:41.995114    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:42.013471    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:42.013589    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:42.027320    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:42.027406    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:42.039105    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:42.039186    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:42.049902    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:42.049981    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:42.060364    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:42.060441    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:42.070804    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:42.070872    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:42.081141    4163 logs.go:276] 0 containers: []
	W0916 10:50:42.081154    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:42.081231    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:42.091635    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:42.091653    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:42.091661    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:42.096002    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:42.096010    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:42.112640    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:42.112650    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:42.129275    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:42.129286    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:42.145006    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:42.145017    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:42.168325    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:42.168333    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:42.180244    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:42.180254    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:42.219411    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:42.219421    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:42.234107    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:42.234118    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:42.280211    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:42.280222    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:42.292057    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:42.292068    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:42.304565    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:42.304575    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:42.315937    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:42.315950    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:42.326725    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:42.326735    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:42.341176    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:42.341188    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:42.355370    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:42.355379    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:42.373216    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:42.373226    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
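Each diagnostic pass locates its targets with the docker ps name filters shown above (k8s_kube-apiserver, k8s_etcd, k8s_coredns, and so on) before tailing 400 lines of each container's logs. A hedged Go sketch of that enumeration against a local Docker daemon (listContainers is an illustrative name; the real runs go through minikube's ssh_runner over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of containers whose name matches
    // k8s_<component>, the same filter the ssh_runner lines above apply.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // one container ID per output line
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }

An empty result is what produces the "No container was found matching \"kindnet\"" warnings throughout this log.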
	I0916 10:50:43.693648    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:43.693853    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:44.915151    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:43.707759    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:43.707849    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:43.725702    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:43.725789    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:43.736408    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:43.736487    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:43.747111    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:43.747204    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:43.757813    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:43.757899    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:43.768586    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:43.768669    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:43.778929    4019 logs.go:276] 0 containers: []
	W0916 10:50:43.778940    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:43.779014    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:43.789388    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:43.789405    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:43.789411    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:43.803662    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:43.803672    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:43.829040    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:43.829050    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:43.833734    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:43.833741    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:43.870759    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:43.870777    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:43.882815    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:43.882826    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:43.900682    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:43.900692    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:43.913401    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:43.913413    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:43.928373    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:43.928386    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:43.940212    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:43.940221    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:43.952325    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:43.952336    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:43.989286    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:43.989296    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:44.003692    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:44.003707    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:44.015584    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:44.015595    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:44.027149    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:44.027159    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:46.544557    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:49.916041    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:49.916121    4163 kubeadm.go:597] duration metric: took 4m3.9849355s to restartPrimaryControlPlane
	W0916 10:50:49.916180    4163 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 10:50:49.916211    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0916 10:50:50.924647    4163 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.008454167s)
	I0916 10:50:50.924742    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:50:50.929755    4163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:50:50.932466    4163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:50:50.935225    4163 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:50:50.935232    4163 kubeadm.go:157] found existing configuration files:
	
	I0916 10:50:50.935259    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf
	I0916 10:50:50.938249    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:50:50.938275    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:50:50.940885    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf
	I0916 10:50:50.943419    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:50:50.943448    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:50:50.946519    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf
	I0916 10:50:50.949286    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:50:50.949311    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:50:50.951726    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf
	I0916 10:50:50.954533    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:50:50.954552    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
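Before re-running kubeadm init, each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails; in this run all four files are already gone after the kubeadm reset, so grep exits with status 2 and the rm calls are effectively no-ops. The four grep/rm pairs reduce to a loop; a sketch under the assumption that any non-zero grep exit means "missing or stale" (cleanStaleConfigs is an illustrative name, not minikube's actual helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // cleanStaleConfigs mirrors the grep-then-rm pairs above: any kubeconfig
    // that does not reference the expected endpoint is deleted so that
    // kubeadm init can regenerate it.
    func cleanStaleConfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                // grep exits non-zero when the file is missing or has no match
                fmt.Println("removing stale", f)
                os.Remove(f) // no-op if the file is already absent, as here
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:50522")
    }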
	I0916 10:50:50.957274    4163 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:50:50.975209    4163 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0916 10:50:50.975272    4163 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:50:51.022181    4163 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:50:51.022246    4163 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:50:51.022322    4163 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 10:50:51.072618    4163 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:50:51.078781    4163 out.go:235]   - Generating certificates and keys ...
	I0916 10:50:51.078816    4163 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:50:51.078849    4163 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:50:51.078899    4163 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 10:50:51.078932    4163 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 10:50:51.078969    4163 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 10:50:51.079003    4163 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 10:50:51.079038    4163 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 10:50:51.079064    4163 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 10:50:51.079108    4163 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 10:50:51.079152    4163 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 10:50:51.079173    4163 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 10:50:51.079202    4163 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:50:51.143726    4163 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:50:51.260328    4163 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:50:51.364328    4163 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:50:51.511064    4163 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:50:51.542659    4163 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:50:51.542709    4163 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:50:51.542731    4163 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:50:51.626895    4163 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:50:51.633100    4163 out.go:235]   - Booting up control plane ...
	I0916 10:50:51.633155    4163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:50:51.633203    4163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:50:51.633237    4163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:50:51.633288    4163 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:50:51.633379    4163 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 10:50:51.546594    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:51.546692    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:51.557650    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:51.557737    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:51.568034    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:51.568118    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:51.579834    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:51.579919    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:51.590841    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:51.590930    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:51.601394    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:51.601477    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:51.614910    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:51.614992    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:51.626012    4019 logs.go:276] 0 containers: []
	W0916 10:50:51.626024    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:51.626101    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:51.637214    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:51.637235    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:51.637242    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:51.642274    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:51.642287    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:51.660673    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:51.660686    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:51.675803    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:51.675815    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:51.687441    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:51.687452    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:50:51.699541    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:51.699552    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:51.725434    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:51.725442    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:51.738706    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:51.738717    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:51.777276    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:51.777285    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:51.789242    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:51.789253    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:51.801763    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:51.801777    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:51.818337    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:51.818354    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:51.835550    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:51.835564    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:51.873899    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:51.873911    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:51.887982    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:51.887993    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:56.132865    4163 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502296 seconds
	I0916 10:50:56.132994    4163 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:50:56.139814    4163 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:50:56.650478    4163 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:50:56.650594    4163 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-385000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:50:57.154428    4163 kubeadm.go:310] [bootstrap-token] Using token: j84bsm.6jms1j7q43m6h00p
	I0916 10:50:57.157640    4163 out.go:235]   - Configuring RBAC rules ...
	I0916 10:50:57.157693    4163 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:50:57.157731    4163 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:50:57.161211    4163 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:50:57.162028    4163 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:50:57.162809    4163 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:50:57.163582    4163 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:50:57.167042    4163 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:50:57.334185    4163 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:50:57.558563    4163 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:50:57.559023    4163 kubeadm.go:310] 
	I0916 10:50:57.559058    4163 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:50:57.559064    4163 kubeadm.go:310] 
	I0916 10:50:57.559114    4163 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:50:57.559118    4163 kubeadm.go:310] 
	I0916 10:50:57.559132    4163 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:50:57.559182    4163 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:50:57.559208    4163 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:50:57.559212    4163 kubeadm.go:310] 
	I0916 10:50:57.559243    4163 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:50:57.559248    4163 kubeadm.go:310] 
	I0916 10:50:57.559276    4163 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:50:57.559280    4163 kubeadm.go:310] 
	I0916 10:50:57.559309    4163 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:50:57.559348    4163 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:50:57.559401    4163 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:50:57.559405    4163 kubeadm.go:310] 
	I0916 10:50:57.559452    4163 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:50:57.559494    4163 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:50:57.559498    4163 kubeadm.go:310] 
	I0916 10:50:57.559545    4163 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j84bsm.6jms1j7q43m6h00p \
	I0916 10:50:57.559598    4163 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda \
	I0916 10:50:57.559610    4163 kubeadm.go:310] 	--control-plane 
	I0916 10:50:57.559615    4163 kubeadm.go:310] 
	I0916 10:50:57.559664    4163 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:50:57.559668    4163 kubeadm.go:310] 
	I0916 10:50:57.559719    4163 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j84bsm.6jms1j7q43m6h00p \
	I0916 10:50:57.559777    4163 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda 
	I0916 10:50:57.559892    4163 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
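The sha256:... value in the join commands above is kubeadm's discovery hash: the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A sketch that recomputes it from the CA certificate in the certificateDir used by this run (caCertHash is an illustrative name; the exact file path is an assumption based on the "/var/lib/minikube/certs" folder named earlier):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash recomputes kubeadm's --discovery-token-ca-cert-hash:
    // the SHA-256 digest of the CA cert's Subject Public Key Info (DER).
    func caCertHash(pemPath string) (string, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(h)
    }

A joining node uses this pinned hash to authenticate the control plane before trusting the bootstrap token.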
	I0916 10:50:57.559985    4163 cni.go:84] Creating CNI manager for ""
	I0916 10:50:57.560001    4163 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:50:57.570498    4163 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:50:57.574486    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:50:57.577662    4163 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
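The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI that the log recommends for the "docker" runtime on kubernetes v1.24+. Its exact contents are not shown in this report; the sketch below writes a representative bridge-plus-portmap conflist, and every value in it (cniVersion, bridge name, the 10.244.0.0/16 subnet) is an illustrative assumption rather than minikube's actual file:

    package main

    import (
        "fmt"
        "os"
    )

    // A representative bridge CNI conflist; the real file minikube writes
    // is not reproduced in this log, so these values are placeholders.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
            []byte(bridgeConflist), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }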
	I0916 10:50:57.582629    4163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:50:57.582683    4163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:50:57.582696    4163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-385000 minikube.k8s.io/updated_at=2024_09_16T10_50_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=stopped-upgrade-385000 minikube.k8s.io/primary=true
	I0916 10:50:57.621029    4163 kubeadm.go:1113] duration metric: took 38.388208ms to wait for elevateKubeSystemPrivileges
	I0916 10:50:57.621038    4163 ops.go:34] apiserver oom_adj: -16
	I0916 10:50:57.621079    4163 kubeadm.go:394] duration metric: took 4m11.703933083s to StartCluster
	I0916 10:50:57.621090    4163 settings.go:142] acquiring lock: {Name:mkcc144e0c413dd8611ee3ccbc8c08f02650f2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:57.621184    4163 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:50:57.621587    4163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/kubeconfig: {Name:mk3766c19461825f7de68cf1dc4ddceadf57e288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:57.621799    4163 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:50:57.621809    4163 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:50:57.621845    4163 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-385000"
	I0916 10:50:57.621856    4163 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-385000"
	W0916 10:50:57.621859    4163 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:50:57.621859    4163 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-385000"
	I0916 10:50:57.621869    4163 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-385000"
	I0916 10:50:57.621872    4163 host.go:66] Checking if "stopped-upgrade-385000" exists ...
	I0916 10:50:57.621895    4163 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:50:57.622763    4163 kapi.go:59] client config for stopped-upgrade-385000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.key", CAFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104389800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:50:57.622897    4163 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-385000"
	W0916 10:50:57.622902    4163 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:50:57.622909    4163 host.go:66] Checking if "stopped-upgrade-385000" exists ...
	I0916 10:50:57.625813    4163 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:57.625819    4163 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:50:57.625825    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:50:57.625523    4163 out.go:177] * Verifying Kubernetes components...
	I0916 10:50:57.633445    4163 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:50:54.402750    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:57.637524    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:50:57.641442    4163 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:57.641448    4163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:50:57.641455    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:50:57.712025    4163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:50:57.717089    4163 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:50:57.717140    4163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:57.720927    4163 api_server.go:72] duration metric: took 99.120375ms to wait for apiserver process to appear ...
	I0916 10:50:57.720934    4163 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:50:57.720941    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:57.743536    4163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:57.785425    4163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:58.137440    4163 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:50:58.137453    4163 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:50:59.404814    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:59.404938    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:59.416045    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:50:59.416132    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:59.426889    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:50:59.426977    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:59.437835    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:50:59.437917    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:59.448687    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:50:59.448778    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:59.465287    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:50:59.465368    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:59.478682    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:50:59.478764    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:59.489600    4019 logs.go:276] 0 containers: []
	W0916 10:50:59.489616    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:59.489688    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:59.501002    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:50:59.501019    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:50:59.501025    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:50:59.516033    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:50:59.516042    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:50:59.528508    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:50:59.528519    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:50:59.541332    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:50:59.541346    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:59.553774    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:59.553785    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:59.558222    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:59.558228    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:59.595517    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:50:59.595529    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:50:59.609560    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:50:59.609576    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:50:59.625462    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:50:59.625475    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:50:59.643598    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:59.643611    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:59.667913    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:59.667924    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:59.703634    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:50:59.703648    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:50:59.715416    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:50:59.715428    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:50:59.727315    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:50:59.727325    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:50:59.738678    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:50:59.738688    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:51:02.252331    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:02.722870    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:02.722921    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:07.254586    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:07.254726    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:51:07.276975    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:51:07.277069    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:51:07.292233    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:51:07.292313    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:51:07.302665    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:51:07.302752    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:51:07.313575    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:51:07.313658    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:51:07.324059    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:51:07.324139    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:51:07.335082    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:51:07.335167    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:51:07.354887    4019 logs.go:276] 0 containers: []
	W0916 10:51:07.354898    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:51:07.354971    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:51:07.365532    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:51:07.365550    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:51:07.365556    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:51:07.385364    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:51:07.385375    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:51:07.389981    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:51:07.389987    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:51:07.426558    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:51:07.426574    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:51:07.441027    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:51:07.441038    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:51:07.453508    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:51:07.453519    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:51:07.465696    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:51:07.465709    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:51:07.478078    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:51:07.478091    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:51:07.516122    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:51:07.516133    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:51:07.530877    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:51:07.530889    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:51:07.542776    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:51:07.542787    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:51:07.554591    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:51:07.554602    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:51:07.579398    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:51:07.579406    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:51:07.591379    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:51:07.591388    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:51:07.608847    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:51:07.608860    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:51:07.723095    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:07.723120    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:10.122576    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:12.723663    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:12.723703    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:15.123603    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:15.123846    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:51:15.145058    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:51:15.145171    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:51:15.160425    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:51:15.160518    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:51:15.172883    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:51:15.172976    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:51:15.184835    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:51:15.184925    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:51:15.195370    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:51:15.195452    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:51:15.206432    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:51:15.206514    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:51:15.217457    4019 logs.go:276] 0 containers: []
	W0916 10:51:15.217468    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:51:15.217541    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:51:15.228239    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:51:15.228258    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:51:15.228263    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:51:15.232787    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:51:15.232794    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:51:15.248314    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:51:15.248325    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:51:15.262384    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:51:15.262397    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:51:15.275522    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:51:15.275533    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:51:15.298100    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:51:15.298115    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:51:15.323101    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:51:15.323115    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:51:15.334804    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:51:15.334818    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:51:15.369511    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:51:15.369526    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:51:15.387716    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:51:15.387726    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:51:15.399770    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:51:15.399780    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:51:15.411649    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:51:15.411661    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:51:15.426799    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:51:15.426811    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:51:15.439062    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:51:15.439074    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:51:15.451310    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:51:15.451321    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:51:17.989023    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:17.724177    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:17.724215    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:22.991323    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:22.991562    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:51:23.018793    4019 logs.go:276] 1 containers: [43c66cee0871]
	I0916 10:51:23.018930    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:51:23.036392    4019 logs.go:276] 1 containers: [4d35cfd047f9]
	I0916 10:51:23.036489    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:51:23.049153    4019 logs.go:276] 4 containers: [feb64a0b1c75 e84c020eeb1e af22ba76198b c1a6f8529ee6]
	I0916 10:51:23.049246    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:51:23.061551    4019 logs.go:276] 1 containers: [e4004b0878ea]
	I0916 10:51:23.061654    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:51:23.072126    4019 logs.go:276] 1 containers: [0da0b18bf25a]
	I0916 10:51:23.072196    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:51:23.082993    4019 logs.go:276] 1 containers: [904c154b318d]
	I0916 10:51:23.083061    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:51:23.093198    4019 logs.go:276] 0 containers: []
	W0916 10:51:23.093211    4019 logs.go:278] No container was found matching "kindnet"
	I0916 10:51:23.093280    4019 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:51:23.103584    4019 logs.go:276] 1 containers: [99cd5cffce2f]
	I0916 10:51:23.103599    4019 logs.go:123] Gathering logs for storage-provisioner [99cd5cffce2f] ...
	I0916 10:51:23.103605    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99cd5cffce2f"
	I0916 10:51:23.114937    4019 logs.go:123] Gathering logs for Docker ...
	I0916 10:51:23.114948    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:51:23.138420    4019 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:51:23.138427    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:51:23.173221    4019 logs.go:123] Gathering logs for etcd [4d35cfd047f9] ...
	I0916 10:51:23.173233    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d35cfd047f9"
	I0916 10:51:23.187281    4019 logs.go:123] Gathering logs for coredns [e84c020eeb1e] ...
	I0916 10:51:23.187292    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e84c020eeb1e"
	I0916 10:51:23.198888    4019 logs.go:123] Gathering logs for kube-controller-manager [904c154b318d] ...
	I0916 10:51:23.198899    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904c154b318d"
	I0916 10:51:23.217004    4019 logs.go:123] Gathering logs for coredns [c1a6f8529ee6] ...
	I0916 10:51:23.217015    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a6f8529ee6"
	I0916 10:51:23.230071    4019 logs.go:123] Gathering logs for kubelet ...
	I0916 10:51:23.230081    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:51:23.266476    4019 logs.go:123] Gathering logs for dmesg ...
	I0916 10:51:23.266490    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:51:23.271712    4019 logs.go:123] Gathering logs for kube-apiserver [43c66cee0871] ...
	I0916 10:51:23.271718    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43c66cee0871"
	I0916 10:51:23.286053    4019 logs.go:123] Gathering logs for coredns [feb64a0b1c75] ...
	I0916 10:51:23.286068    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 feb64a0b1c75"
	I0916 10:51:23.297457    4019 logs.go:123] Gathering logs for coredns [af22ba76198b] ...
	I0916 10:51:23.297467    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af22ba76198b"
	I0916 10:51:23.314856    4019 logs.go:123] Gathering logs for kube-scheduler [e4004b0878ea] ...
	I0916 10:51:23.314869    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4004b0878ea"
	I0916 10:51:23.333887    4019 logs.go:123] Gathering logs for kube-proxy [0da0b18bf25a] ...
	I0916 10:51:23.333896    4019 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0da0b18bf25a"
	I0916 10:51:23.345283    4019 logs.go:123] Gathering logs for container status ...
	I0916 10:51:23.345295    4019 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:51:22.724861    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:22.724897    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:27.725894    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:27.725932    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0916 10:51:28.139279    4163 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0916 10:51:28.147867    4163 out.go:177] * Enabled addons: storage-provisioner
	I0916 10:51:25.857497    4019 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:30.859559    4019 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:30.865086    4019 out.go:201] 
	W0916 10:51:30.869040    4019 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0916 10:51:30.869048    4019 out.go:270] * 
	W0916 10:51:30.869647    4019 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:51:30.882955    4019 out.go:201] 
	I0916 10:51:28.155785    4163 addons.go:510] duration metric: took 30.534887167s for enable addons: enabled=[storage-provisioner]
	I0916 10:51:32.726118    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:32.726160    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:37.727455    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:37.727502    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-09-16 17:42:33 UTC, ends at Mon 2024-09-16 17:51:46 UTC. --
	Sep 16 17:51:33 running-upgrade-707000 dockerd[3005]: time="2024-09-16T17:51:33.394803956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:51:33 running-upgrade-707000 dockerd[3005]: time="2024-09-16T17:51:33.394861742Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/26efc6c044a865276610a1aefbccb222d389ba0ffda15c3f2b0063b36f381a54 pid=19030 runtime=io.containerd.runc.v2
	Sep 16 17:51:33 running-upgrade-707000 dockerd[3005]: time="2024-09-16T17:51:33.395074886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 17:51:33 running-upgrade-707000 dockerd[3005]: time="2024-09-16T17:51:33.395088260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 17:51:33 running-upgrade-707000 dockerd[3005]: time="2024-09-16T17:51:33.395092968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 17:51:33 running-upgrade-707000 dockerd[3005]: time="2024-09-16T17:51:33.395141087Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/bf8903d17880212c8b52d42c1df948f47b0e38414d4535614e035057e0a37f73 pid=19031 runtime=io.containerd.runc.v2
	Sep 16 17:51:33 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:33Z" level=error msg="ContainerStats resp: {0x4000594700 linux}"
	Sep 16 17:51:33 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:33Z" level=error msg="ContainerStats resp: {0x4000594840 linux}"
	Sep 16 17:51:33 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:33Z" level=error msg="ContainerStats resp: {0x40004f76c0 linux}"
	Sep 16 17:51:33 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:33Z" level=error msg="ContainerStats resp: {0x4000595640 linux}"
	Sep 16 17:51:33 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:33Z" level=error msg="ContainerStats resp: {0x4000595b80 linux}"
	Sep 16 17:51:33 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:33Z" level=error msg="ContainerStats resp: {0x40006c6600 linux}"
	Sep 16 17:51:33 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:33Z" level=error msg="ContainerStats resp: {0x4000356ac0 linux}"
	Sep 16 17:51:37 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 16 17:51:42 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:42Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 16 17:51:43 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:43Z" level=error msg="ContainerStats resp: {0x40008da380 linux}"
	Sep 16 17:51:43 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:43Z" level=error msg="ContainerStats resp: {0x40008da4c0 linux}"
	Sep 16 17:51:44 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:44Z" level=error msg="ContainerStats resp: {0x40008dae40 linux}"
	Sep 16 17:51:45 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:45Z" level=error msg="ContainerStats resp: {0x4000356440 linux}"
	Sep 16 17:51:45 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:45Z" level=error msg="ContainerStats resp: {0x4000356ac0 linux}"
	Sep 16 17:51:45 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:45Z" level=error msg="ContainerStats resp: {0x4000357180 linux}"
	Sep 16 17:51:45 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:45Z" level=error msg="ContainerStats resp: {0x4000594040 linux}"
	Sep 16 17:51:45 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:45Z" level=error msg="ContainerStats resp: {0x4000594780 linux}"
	Sep 16 17:51:45 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:45Z" level=error msg="ContainerStats resp: {0x40009823c0 linux}"
	Sep 16 17:51:45 running-upgrade-707000 cri-dockerd[2846]: time="2024-09-16T17:51:45Z" level=error msg="ContainerStats resp: {0x40005951c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	26efc6c044a86       edaa71f2aee88       13 seconds ago      Running             coredns                   2                   95f6acafd9186
	bf8903d178802       edaa71f2aee88       13 seconds ago      Running             coredns                   2                   cfc96f346c06e
	feb64a0b1c756       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   cfc96f346c06e
	e84c020eeb1ef       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   95f6acafd9186
	99cd5cffce2f5       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   ca3069b48ab5d
	0da0b18bf25a3       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   e93c9a5a87702
	43c66cee08716       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   4360feb9d5b39
	e4004b0878ea0       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e0d21bd6e7ca6
	4d35cfd047f9f       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   7d227e2a26bd3
	904c154b318d7       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   9093affbbd191
	
	
	==> coredns [26efc6c044a8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4021226671629365255.1696144947406215025. HINFO: read udp 10.244.0.3:46890->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4021226671629365255.1696144947406215025. HINFO: read udp 10.244.0.3:45416->10.0.2.3:53: i/o timeout
	
	
	==> coredns [bf8903d17880] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8988189209642025089.2378711284872810396. HINFO: read udp 10.244.0.2:33482->10.0.2.3:53: i/o timeout
	
	
	==> coredns [e84c020eeb1e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:58263->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:58799->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:40265->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:32823->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:48518->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:37468->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:46147->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:52115->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:37544->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7690522343845307220.7067326986715166427. HINFO: read udp 10.244.0.3:54002->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [feb64a0b1c75] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:47457->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:44108->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:34697->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:57684->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:45142->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:51547->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:32843->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:52978->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:34992->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8704089521787398307.7438161811215024067. HINFO: read udp 10.244.0.2:47894->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-707000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-707000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=running-upgrade-707000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_47_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 17:47:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-707000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 17:51:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 17:47:29 +0000   Mon, 16 Sep 2024 17:47:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 17:47:29 +0000   Mon, 16 Sep 2024 17:47:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 17:47:29 +0000   Mon, 16 Sep 2024 17:47:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 17:47:29 +0000   Mon, 16 Sep 2024 17:47:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-707000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f5cf03531e04c8ead68d7cc3bf0bcfd
	  System UUID:                6f5cf03531e04c8ead68d7cc3bf0bcfd
	  Boot ID:                    c3d203c0-8d7a-41ed-94b5-858072c37e21
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2rdrc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-x2z4j                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-707000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-707000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-707000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-522z8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-707000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-707000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-707000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-707000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-707000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-707000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-707000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-707000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-707000 event: Registered Node running-upgrade-707000 in Controller
	
	
	==> dmesg <==
	[  +1.811782] systemd-fstab-generator[831]: Ignoring "noauto" for root device
	[  +0.083512] systemd-fstab-generator[842]: Ignoring "noauto" for root device
	[  +0.078662] systemd-fstab-generator[853]: Ignoring "noauto" for root device
	[  +1.141303] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091084] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.079797] systemd-fstab-generator[1014]: Ignoring "noauto" for root device
	[  +2.443389] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[Sep16 17:43] systemd-fstab-generator[2058]: Ignoring "noauto" for root device
	[  +2.608931] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
	[  +0.132472] systemd-fstab-generator[2378]: Ignoring "noauto" for root device
	[  +0.093847] systemd-fstab-generator[2389]: Ignoring "noauto" for root device
	[  +0.095353] systemd-fstab-generator[2402]: Ignoring "noauto" for root device
	[  +1.474704] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.140671] systemd-fstab-generator[2803]: Ignoring "noauto" for root device
	[  +0.079645] systemd-fstab-generator[2814]: Ignoring "noauto" for root device
	[  +0.081981] systemd-fstab-generator[2825]: Ignoring "noauto" for root device
	[  +0.077732] systemd-fstab-generator[2839]: Ignoring "noauto" for root device
	[  +2.441686] systemd-fstab-generator[2992]: Ignoring "noauto" for root device
	[  +2.828156] systemd-fstab-generator[3359]: Ignoring "noauto" for root device
	[  +1.450213] systemd-fstab-generator[3885]: Ignoring "noauto" for root device
	[ +19.242471] kauditd_printk_skb: 68 callbacks suppressed
	[Sep16 17:47] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.273626] systemd-fstab-generator[12083]: Ignoring "noauto" for root device
	[  +5.637648] systemd-fstab-generator[12677]: Ignoring "noauto" for root device
	[  +0.462362] systemd-fstab-generator[12810]: Ignoring "noauto" for root device
	
	
	==> etcd [4d35cfd047f9] <==
	{"level":"info","ts":"2024-09-16T17:47:25.199Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T17:47:25.199Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T17:47:25.201Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-16T17:47:25.202Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-16T17:47:25.202Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-16T17:47:25.204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-16T17:47:25.204Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-707000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:47:25.881Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:47:25.882Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-16T17:47:25.882Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T17:47:25.882Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T17:47:25.882Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T17:47:25.882Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T17:47:25.890Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:47:25.890Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T17:47:25.890Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 17:51:47 up 9 min,  0 users,  load average: 0.23, 0.34, 0.18
	Linux running-upgrade-707000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [43c66cee0871] <==
	I0916 17:47:27.121853       1 controller.go:611] quota admission added evaluator for: namespaces
	I0916 17:47:27.149040       1 cache.go:39] Caches are synced for autoregister controller
	I0916 17:47:27.149043       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0916 17:47:27.151238       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0916 17:47:27.151425       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0916 17:47:27.151532       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 17:47:27.168032       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0916 17:47:27.883221       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 17:47:28.060827       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 17:47:28.070278       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 17:47:28.070539       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 17:47:28.211247       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 17:47:28.224327       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 17:47:28.326835       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0916 17:47:28.328902       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0916 17:47:28.329226       1 controller.go:611] quota admission added evaluator for: endpoints
	I0916 17:47:28.330431       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 17:47:29.204895       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0916 17:47:29.838935       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0916 17:47:29.842037       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0916 17:47:29.848711       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0916 17:47:29.892121       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 17:47:43.463523       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0916 17:47:43.659637       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0916 17:47:44.189239       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [904c154b318d] <==
	I0916 17:47:43.473133       1 range_allocator.go:173] Starting range CIDR allocator
	I0916 17:47:43.473156       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0916 17:47:43.473181       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0916 17:47:43.475382       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-x2z4j"
	I0916 17:47:43.477218       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 17:47:43.480986       1 range_allocator.go:374] Set node running-upgrade-707000 PodCIDR to [10.244.0.0/24]
	I0916 17:47:43.481573       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2rdrc"
	I0916 17:47:43.521833       1 shared_informer.go:262] Caches are synced for HPA
	I0916 17:47:43.557554       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 17:47:43.557680       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 17:47:43.557715       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0916 17:47:43.557745       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 17:47:43.605632       1 shared_informer.go:262] Caches are synced for taint
	I0916 17:47:43.605689       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0916 17:47:43.605717       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-707000. Assuming now as a timestamp.
	I0916 17:47:43.605739       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0916 17:47:43.605886       1 event.go:294] "Event occurred" object="running-upgrade-707000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-707000 event: Registered Node running-upgrade-707000 in Controller"
	I0916 17:47:43.605929       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0916 17:47:43.654860       1 shared_informer.go:262] Caches are synced for daemon sets
	I0916 17:47:43.663439       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-522z8"
	I0916 17:47:43.676056       1 shared_informer.go:262] Caches are synced for resource quota
	I0916 17:47:43.711710       1 shared_informer.go:262] Caches are synced for resource quota
	I0916 17:47:44.093114       1 shared_informer.go:262] Caches are synced for garbage collector
	I0916 17:47:44.107156       1 shared_informer.go:262] Caches are synced for garbage collector
	I0916 17:47:44.107167       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [0da0b18bf25a] <==
	I0916 17:47:44.175250       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0916 17:47:44.175320       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0916 17:47:44.175341       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0916 17:47:44.185758       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0916 17:47:44.185770       1 server_others.go:206] "Using iptables Proxier"
	I0916 17:47:44.185783       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0916 17:47:44.185870       1 server.go:661] "Version info" version="v1.24.1"
	I0916 17:47:44.185874       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 17:47:44.186113       1 config.go:317] "Starting service config controller"
	I0916 17:47:44.186122       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0916 17:47:44.186130       1 config.go:226] "Starting endpoint slice config controller"
	I0916 17:47:44.186132       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0916 17:47:44.187154       1 config.go:444] "Starting node config controller"
	I0916 17:47:44.187159       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0916 17:47:44.286719       1 shared_informer.go:262] Caches are synced for service config
	I0916 17:47:44.286745       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0916 17:47:44.287308       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [e4004b0878ea] <==
	W0916 17:47:27.118884       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 17:47:27.118891       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 17:47:27.118907       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 17:47:27.118915       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0916 17:47:27.119128       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 17:47:27.119138       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 17:47:27.119198       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 17:47:27.119206       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0916 17:47:27.119227       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 17:47:27.119232       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0916 17:47:27.119260       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 17:47:27.119266       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 17:47:27.119281       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 17:47:27.119284       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 17:47:27.119654       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 17:47:27.119756       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0916 17:47:27.982252       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 17:47:27.982518       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0916 17:47:27.982259       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 17:47:27.982806       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0916 17:47:28.111369       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 17:47:28.111393       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0916 17:47:28.186347       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 17:47:28.186441       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0916 17:47:30.918801       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-09-16 17:42:33 UTC, ends at Mon 2024-09-16 17:51:47 UTC. --
	Sep 16 17:47:31 running-upgrade-707000 kubelet[12683]: I0916 17:47:31.092330   12683 reconciler.go:157] "Reconciler: start to sync state"
	Sep 16 17:47:31 running-upgrade-707000 kubelet[12683]: E0916 17:47:31.470807   12683 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-707000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-707000"
	Sep 16 17:47:31 running-upgrade-707000 kubelet[12683]: E0916 17:47:31.670702   12683 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-707000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-707000"
	Sep 16 17:47:31 running-upgrade-707000 kubelet[12683]: E0916 17:47:31.870568   12683 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-707000\" already exists" pod="kube-system/etcd-running-upgrade-707000"
	Sep 16 17:47:32 running-upgrade-707000 kubelet[12683]: I0916 17:47:32.067561   12683 request.go:601] Waited for 1.130956359s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 16 17:47:32 running-upgrade-707000 kubelet[12683]: E0916 17:47:32.069839   12683 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-707000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-707000"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.579850   12683 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.580243   12683 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.612994   12683 topology_manager.go:200] "Topology Admit Handler"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.665758   12683 topology_manager.go:200] "Topology Admit Handler"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.681207   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkxz\" (UniqueName: \"kubernetes.io/projected/97c5c42a-a89c-4b35-9eb6-9670ad3f3477-kube-api-access-ghkxz\") pod \"storage-provisioner\" (UID: \"97c5c42a-a89c-4b35-9eb6-9670ad3f3477\") " pod="kube-system/storage-provisioner"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.681294   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a36f1cca-12b9-4f1d-9e75-57c0dce335ca-kube-proxy\") pod \"kube-proxy-522z8\" (UID: \"a36f1cca-12b9-4f1d-9e75-57c0dce335ca\") " pod="kube-system/kube-proxy-522z8"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.681325   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99ssg\" (UniqueName: \"kubernetes.io/projected/a36f1cca-12b9-4f1d-9e75-57c0dce335ca-kube-api-access-99ssg\") pod \"kube-proxy-522z8\" (UID: \"a36f1cca-12b9-4f1d-9e75-57c0dce335ca\") " pod="kube-system/kube-proxy-522z8"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.681367   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a36f1cca-12b9-4f1d-9e75-57c0dce335ca-lib-modules\") pod \"kube-proxy-522z8\" (UID: \"a36f1cca-12b9-4f1d-9e75-57c0dce335ca\") " pod="kube-system/kube-proxy-522z8"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.681381   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/97c5c42a-a89c-4b35-9eb6-9670ad3f3477-tmp\") pod \"storage-provisioner\" (UID: \"97c5c42a-a89c-4b35-9eb6-9670ad3f3477\") " pod="kube-system/storage-provisioner"
	Sep 16 17:47:43 running-upgrade-707000 kubelet[12683]: I0916 17:47:43.681390   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a36f1cca-12b9-4f1d-9e75-57c0dce335ca-xtables-lock\") pod \"kube-proxy-522z8\" (UID: \"a36f1cca-12b9-4f1d-9e75-57c0dce335ca\") " pod="kube-system/kube-proxy-522z8"
	Sep 16 17:47:44 running-upgrade-707000 kubelet[12683]: I0916 17:47:44.078749   12683 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ca3069b48ab5d63c9c3a775b2492d38f68c5e8a96dbde13a1dfb19b368b9ea68"
	Sep 16 17:47:45 running-upgrade-707000 kubelet[12683]: I0916 17:47:45.232112   12683 topology_manager.go:200] "Topology Admit Handler"
	Sep 16 17:47:45 running-upgrade-707000 kubelet[12683]: I0916 17:47:45.233411   12683 topology_manager.go:200] "Topology Admit Handler"
	Sep 16 17:47:45 running-upgrade-707000 kubelet[12683]: I0916 17:47:45.293004   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqx4j\" (UniqueName: \"kubernetes.io/projected/13fde8b8-1beb-47b1-8af4-07df994999f9-kube-api-access-pqx4j\") pod \"coredns-6d4b75cb6d-2rdrc\" (UID: \"13fde8b8-1beb-47b1-8af4-07df994999f9\") " pod="kube-system/coredns-6d4b75cb6d-2rdrc"
	Sep 16 17:47:45 running-upgrade-707000 kubelet[12683]: I0916 17:47:45.293030   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/434bd0e0-c466-453e-ae44-e9d422ff80f3-config-volume\") pod \"coredns-6d4b75cb6d-x2z4j\" (UID: \"434bd0e0-c466-453e-ae44-e9d422ff80f3\") " pod="kube-system/coredns-6d4b75cb6d-x2z4j"
	Sep 16 17:47:45 running-upgrade-707000 kubelet[12683]: I0916 17:47:45.293042   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltbrx\" (UniqueName: \"kubernetes.io/projected/434bd0e0-c466-453e-ae44-e9d422ff80f3-kube-api-access-ltbrx\") pod \"coredns-6d4b75cb6d-x2z4j\" (UID: \"434bd0e0-c466-453e-ae44-e9d422ff80f3\") " pod="kube-system/coredns-6d4b75cb6d-x2z4j"
	Sep 16 17:47:45 running-upgrade-707000 kubelet[12683]: I0916 17:47:45.293053   12683 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13fde8b8-1beb-47b1-8af4-07df994999f9-config-volume\") pod \"coredns-6d4b75cb6d-2rdrc\" (UID: \"13fde8b8-1beb-47b1-8af4-07df994999f9\") " pod="kube-system/coredns-6d4b75cb6d-2rdrc"
	Sep 16 17:51:33 running-upgrade-707000 kubelet[12683]: I0916 17:51:33.452840   12683 scope.go:110] "RemoveContainer" containerID="af22ba76198b3ac08b3d6d295a521e78e8ab74b65b4e886934edf08a7bff597e"
	Sep 16 17:51:33 running-upgrade-707000 kubelet[12683]: I0916 17:51:33.463329   12683 scope.go:110] "RemoveContainer" containerID="c1a6f8529ee6cd8c40f1b4b0eb1168208e67570614af26c9c2de07ffaf893d7a"
	
	
	==> storage-provisioner [99cd5cffce2f] <==
	I0916 17:47:44.152245       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 17:47:44.158993       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 17:47:44.159014       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 17:47:44.163756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 17:47:44.164009       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c171dc4c-b435-4150-94a5-53a4b4e61197", APIVersion:"v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-707000_be0510dd-5e7a-4269-a5f5-eeba6274cfa6 became leader
	I0916 17:47:44.164417       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-707000_be0510dd-5e7a-4269-a5f5-eeba6274cfa6!
	I0916 17:47:44.264824       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-707000_be0510dd-5e7a-4269-a5f5-eeba6274cfa6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-707000 -n running-upgrade-707000
E0916 10:51:49.831571    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-707000 -n running-upgrade-707000: exit status 2 (15.676723625s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-707000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-707000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-707000
E0916 10:52:04.056124    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-707000: (1.407894917s)
--- FAIL: TestRunningBinaryUpgrade (598.35s)

TestKubernetesUpgrade (18.46s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-153000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-153000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.892690875s)

-- stdout --
	* [kubernetes-upgrade-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-153000" primary control-plane node in "kubernetes-upgrade-153000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-153000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0916 10:45:06.272112    4088 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:06.272237    4088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:06.272241    4088 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:06.272243    4088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:06.272389    4088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:45:06.273445    4088 out.go:352] Setting JSON to false
	I0916 10:45:06.289722    4088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2670,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:45:06.289792    4088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:45:06.296077    4088 out.go:177] * [kubernetes-upgrade-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:45:06.304232    4088 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:45:06.304268    4088 notify.go:220] Checking for updates...
	I0916 10:45:06.310178    4088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:45:06.313195    4088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:45:06.316130    4088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:45:06.319149    4088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:45:06.322206    4088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:45:06.325463    4088 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:45:06.325526    4088 config.go:182] Loaded profile config "running-upgrade-707000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:45:06.325578    4088 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:45:06.330208    4088 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:45:06.336190    4088 start.go:297] selected driver: qemu2
	I0916 10:45:06.336195    4088 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:45:06.336201    4088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:45:06.338368    4088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:45:06.341174    4088 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:45:06.344230    4088 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:45:06.344244    4088 cni.go:84] Creating CNI manager for ""
	I0916 10:45:06.344265    4088 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:45:06.344295    4088 start.go:340] cluster config:
	{Name:kubernetes-upgrade-153000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:45:06.347839    4088 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:45:06.355156    4088 out.go:177] * Starting "kubernetes-upgrade-153000" primary control-plane node in "kubernetes-upgrade-153000" cluster
	I0916 10:45:06.359191    4088 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:45:06.359205    4088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 10:45:06.359215    4088 cache.go:56] Caching tarball of preloaded images
	I0916 10:45:06.359272    4088 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:45:06.359278    4088 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 10:45:06.359332    4088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/kubernetes-upgrade-153000/config.json ...
	I0916 10:45:06.359343    4088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/kubernetes-upgrade-153000/config.json: {Name:mk118ced3d64321422229277e17ce8f554dd213f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:06.359731    4088 start.go:360] acquireMachinesLock for kubernetes-upgrade-153000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:45:06.359766    4088 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "kubernetes-upgrade-153000"
	I0916 10:45:06.359778    4088 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:45:06.359803    4088 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:45:06.363198    4088 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:45:06.379711    4088 start.go:159] libmachine.API.Create for "kubernetes-upgrade-153000" (driver="qemu2")
	I0916 10:45:06.379741    4088 client.go:168] LocalClient.Create starting
	I0916 10:45:06.379800    4088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:45:06.379830    4088 main.go:141] libmachine: Decoding PEM data...
	I0916 10:45:06.379842    4088 main.go:141] libmachine: Parsing certificate...
	I0916 10:45:06.379877    4088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:45:06.379901    4088 main.go:141] libmachine: Decoding PEM data...
	I0916 10:45:06.379910    4088 main.go:141] libmachine: Parsing certificate...
	I0916 10:45:06.380339    4088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:45:06.568311    4088 main.go:141] libmachine: Creating SSH key...
	I0916 10:45:06.607434    4088 main.go:141] libmachine: Creating Disk image...
	I0916 10:45:06.607441    4088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:45:06.607633    4088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2
	I0916 10:45:06.617070    4088 main.go:141] libmachine: STDOUT: 
	I0916 10:45:06.617087    4088 main.go:141] libmachine: STDERR: 
	I0916 10:45:06.617163    4088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2 +20000M
	I0916 10:45:06.625418    4088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:45:06.625434    4088 main.go:141] libmachine: STDERR: 
	I0916 10:45:06.625454    4088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2
	I0916 10:45:06.625460    4088 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:45:06.625473    4088 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:45:06.625510    4088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f5:9b:08:2e:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2
	I0916 10:45:06.627167    4088 main.go:141] libmachine: STDOUT: 
	I0916 10:45:06.627181    4088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:45:06.627201    4088 client.go:171] duration metric: took 247.460792ms to LocalClient.Create
	I0916 10:45:08.627737    4088 start.go:128] duration metric: took 2.267981167s to createHost
	I0916 10:45:08.627752    4088 start.go:83] releasing machines lock for "kubernetes-upgrade-153000", held for 2.26802875s
	W0916 10:45:08.627769    4088 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:45:08.644276    4088 out.go:177] * Deleting "kubernetes-upgrade-153000" in qemu2 ...
	W0916 10:45:08.655845    4088 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:45:08.655853    4088 start.go:729] Will try again in 5 seconds ...
	I0916 10:45:13.656923    4088 start.go:360] acquireMachinesLock for kubernetes-upgrade-153000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:45:13.657246    4088 start.go:364] duration metric: took 223.333µs to acquireMachinesLock for "kubernetes-upgrade-153000"
	I0916 10:45:13.657318    4088 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:45:13.657477    4088 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:45:13.665008    4088 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:45:13.707138    4088 start.go:159] libmachine.API.Create for "kubernetes-upgrade-153000" (driver="qemu2")
	I0916 10:45:13.707190    4088 client.go:168] LocalClient.Create starting
	I0916 10:45:13.707314    4088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:45:13.707389    4088 main.go:141] libmachine: Decoding PEM data...
	I0916 10:45:13.707407    4088 main.go:141] libmachine: Parsing certificate...
	I0916 10:45:13.707469    4088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:45:13.707532    4088 main.go:141] libmachine: Decoding PEM data...
	I0916 10:45:13.707544    4088 main.go:141] libmachine: Parsing certificate...
	I0916 10:45:13.708156    4088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:45:13.887660    4088 main.go:141] libmachine: Creating SSH key...
	I0916 10:45:14.071080    4088 main.go:141] libmachine: Creating Disk image...
	I0916 10:45:14.071091    4088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:45:14.071316    4088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2
	I0916 10:45:14.080957    4088 main.go:141] libmachine: STDOUT: 
	I0916 10:45:14.080976    4088 main.go:141] libmachine: STDERR: 
	I0916 10:45:14.081051    4088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2 +20000M
	I0916 10:45:14.089006    4088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:45:14.089022    4088 main.go:141] libmachine: STDERR: 
	I0916 10:45:14.089043    4088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2
	I0916 10:45:14.089048    4088 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:45:14.089064    4088 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:45:14.089094    4088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:9e:93:e8:07:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2
	I0916 10:45:14.090744    4088 main.go:141] libmachine: STDOUT: 
	I0916 10:45:14.090768    4088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:45:14.090780    4088 client.go:171] duration metric: took 383.592666ms to LocalClient.Create
	I0916 10:45:16.092919    4088 start.go:128] duration metric: took 2.435466667s to createHost
	I0916 10:45:16.092981    4088 start.go:83] releasing machines lock for "kubernetes-upgrade-153000", held for 2.435778375s
	W0916 10:45:16.093313    4088 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-153000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-153000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:45:16.105825    4088 out.go:201] 
	W0916 10:45:16.111052    4088 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:45:16.111137    4088 out.go:270] * 
	* 
	W0916 10:45:16.113829    4088 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:45:16.122923    4088 out.go:201] 
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-153000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-153000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-153000: (3.18358025s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-153000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-153000 status --format={{.Host}}: exit status 7 (65.208875ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-153000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-153000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.173418417s)
-- stdout --
	* [kubernetes-upgrade-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-153000" primary control-plane node in "kubernetes-upgrade-153000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-153000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-153000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0916 10:45:19.414553    4125 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:45:19.414694    4125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:19.414697    4125 out.go:358] Setting ErrFile to fd 2...
	I0916 10:45:19.414699    4125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:45:19.414839    4125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:45:19.415846    4125 out.go:352] Setting JSON to false
	I0916 10:45:19.432037    4125 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2683,"bootTime":1726506036,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:45:19.432097    4125 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:45:19.436498    4125 out.go:177] * [kubernetes-upgrade-153000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:45:19.445437    4125 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:45:19.445481    4125 notify.go:220] Checking for updates...
	I0916 10:45:19.452412    4125 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:45:19.455419    4125 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:45:19.458363    4125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:45:19.461391    4125 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:45:19.464412    4125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:45:19.467640    4125 config.go:182] Loaded profile config "kubernetes-upgrade-153000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0916 10:45:19.467901    4125 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:45:19.472326    4125 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:45:19.479372    4125 start.go:297] selected driver: qemu2
	I0916 10:45:19.479378    4125 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:45:19.479452    4125 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:45:19.481797    4125 cni.go:84] Creating CNI manager for ""
	I0916 10:45:19.481837    4125 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:45:19.481858    4125 start.go:340] cluster config:
	{Name:kubernetes-upgrade-153000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-153000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:45:19.485310    4125 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:45:19.493309    4125 out.go:177] * Starting "kubernetes-upgrade-153000" primary control-plane node in "kubernetes-upgrade-153000" cluster
	I0916 10:45:19.497302    4125 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:45:19.497323    4125 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:45:19.497336    4125 cache.go:56] Caching tarball of preloaded images
	I0916 10:45:19.497410    4125 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:45:19.497416    4125 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:45:19.497470    4125 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/kubernetes-upgrade-153000/config.json ...
	I0916 10:45:19.498019    4125 start.go:360] acquireMachinesLock for kubernetes-upgrade-153000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:45:19.498049    4125 start.go:364] duration metric: took 23.292µs to acquireMachinesLock for "kubernetes-upgrade-153000"
	I0916 10:45:19.498058    4125 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:45:19.498064    4125 fix.go:54] fixHost starting: 
	I0916 10:45:19.498186    4125 fix.go:112] recreateIfNeeded on kubernetes-upgrade-153000: state=Stopped err=<nil>
	W0916 10:45:19.498196    4125 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:45:19.505360    4125 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-153000" ...
	I0916 10:45:19.509279    4125 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:45:19.509315    4125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:9e:93:e8:07:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2
	I0916 10:45:19.511318    4125 main.go:141] libmachine: STDOUT: 
	I0916 10:45:19.511333    4125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:45:19.511363    4125 fix.go:56] duration metric: took 13.315416ms for fixHost
	I0916 10:45:19.511368    4125 start.go:83] releasing machines lock for "kubernetes-upgrade-153000", held for 13.331125ms
	W0916 10:45:19.511373    4125 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:45:19.511410    4125 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:45:19.511415    4125 start.go:729] Will try again in 5 seconds ...
	I0916 10:45:24.508107    4125 start.go:360] acquireMachinesLock for kubernetes-upgrade-153000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:45:24.508294    4125 start.go:364] duration metric: took 156.291µs to acquireMachinesLock for "kubernetes-upgrade-153000"
	I0916 10:45:24.508322    4125 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:45:24.508326    4125 fix.go:54] fixHost starting: 
	I0916 10:45:24.508482    4125 fix.go:112] recreateIfNeeded on kubernetes-upgrade-153000: state=Stopped err=<nil>
	W0916 10:45:24.508491    4125 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:45:24.511746    4125 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-153000" ...
	I0916 10:45:24.519529    4125 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:45:24.519581    4125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:9e:93:e8:07:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubernetes-upgrade-153000/disk.qcow2
	I0916 10:45:24.521629    4125 main.go:141] libmachine: STDOUT: 
	I0916 10:45:24.521725    4125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:45:24.521745    4125 fix.go:56] duration metric: took 13.430959ms for fixHost
	I0916 10:45:24.521749    4125 start.go:83] releasing machines lock for "kubernetes-upgrade-153000", held for 13.455959ms
	W0916 10:45:24.521793    4125 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-153000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-153000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:45:24.528649    4125 out.go:201] 
	W0916 10:45:24.532545    4125 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:45:24.532550    4125 out.go:270] * 
	* 
	W0916 10:45:24.532986    4125 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:45:24.542617    4125 out.go:201] 
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-153000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-153000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-153000 version --output=json: exit status 1 (29.723667ms)
** stderr ** 
	error: context "kubernetes-upgrade-153000" does not exist
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-16 10:45:24.582644 -0700 PDT m=+2462.743263751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-153000 -n kubernetes-upgrade-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-153000 -n kubernetes-upgrade-153000: exit status 7 (29.995ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-153000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-153000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-153000
--- FAIL: TestKubernetesUpgrade (18.46s)
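
Both provisioning attempts in this test fail at the same point: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" dialing /var/run/socket_vmnet, so minikube exits with status 80 before any VM boots. A minimal pre-flight sketch, assuming only the socket path reported in the log (this probe is not part of the minikube test suite), that dials the unix socket so a dead socket_vmnet daemon fails fast instead of surfacing as a driver error:

	// socketcheck.go: hypothetical pre-flight probe for the socket_vmnet daemon.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from the "Failed to connect" errors in this report.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the condition every start attempt above hits.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}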

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.59s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19649
- KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3974344061/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.59s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19649
- KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current898266889/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)
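
Both TestHyperkitDriverSkipUpgrade subtests fail the same way: the hyperkit driver exists only for Intel Macs, so on this darwin/arm64 agent minikube exits with code 56 (DRV_UNSUPPORTED_OS) before any upgrade logic runs. A sketch of an architecture guard, with a hypothetical helper name (the real driver_install_or_update_test.go may gate this differently), that would skip rather than fail on Apple Silicon:

	package integration

	import (
		"runtime"
		"testing"
	)

	// skipIfHyperkitUnsupported marks the test skipped on hosts where the
	// hyperkit driver cannot run, matching the DRV_UNSUPPORTED_OS exit above.
	func skipIfHyperkitUnsupported(t *testing.T) {
		t.Helper()
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is not supported on darwin/arm64")
		}
	}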

TestStoppedBinaryUpgrade/Upgrade (573.24s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.999394366 start -p stopped-upgrade-385000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.999394366 start -p stopped-upgrade-385000 --memory=2200 --vm-driver=qemu2 : (39.637572208s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.999394366 -p stopped-upgrade-385000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.999394366 -p stopped-upgrade-385000 stop: (12.122198333s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-385000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0916 10:46:49.840376    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:04.062030    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-385000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.391650708s)
-- stdout --
	* [stopped-upgrade-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-385000" primary control-plane node in "stopped-upgrade-385000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-385000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0916 10:46:17.464670    4163 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:46:17.464824    4163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:46:17.464829    4163 out.go:358] Setting ErrFile to fd 2...
	I0916 10:46:17.464833    4163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:46:17.464994    4163 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:46:17.466214    4163 out.go:352] Setting JSON to false
	I0916 10:46:17.486101    4163 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2741,"bootTime":1726506036,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:46:17.486177    4163 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:46:17.490710    4163 out.go:177] * [stopped-upgrade-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:46:17.498854    4163 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:46:17.498919    4163 notify.go:220] Checking for updates...
	I0916 10:46:17.505818    4163 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:46:17.508710    4163 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:46:17.511798    4163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:46:17.514827    4163 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:46:17.516127    4163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:46:17.519145    4163 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:46:17.522802    4163 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 10:46:17.525802    4163 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:46:17.529790    4163 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:46:17.536810    4163 start.go:297] selected driver: qemu2
	I0916 10:46:17.536817    4163 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 10:46:17.536897    4163 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:46:17.539882    4163 cni.go:84] Creating CNI manager for ""
	I0916 10:46:17.539915    4163 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:46:17.539943    4163 start.go:340] cluster config:
	{Name:stopped-upgrade-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 10:46:17.540005    4163 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:46:17.547822    4163 out.go:177] * Starting "stopped-upgrade-385000" primary control-plane node in "stopped-upgrade-385000" cluster
	I0916 10:46:17.551705    4163 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 10:46:17.551726    4163 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0916 10:46:17.551739    4163 cache.go:56] Caching tarball of preloaded images
	I0916 10:46:17.551804    4163 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:46:17.551810    4163 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0916 10:46:17.551857    4163 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/config.json ...
	I0916 10:46:17.552223    4163 start.go:360] acquireMachinesLock for stopped-upgrade-385000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:46:17.552251    4163 start.go:364] duration metric: took 22.167µs to acquireMachinesLock for "stopped-upgrade-385000"
	I0916 10:46:17.552259    4163 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:46:17.552265    4163 fix.go:54] fixHost starting: 
	I0916 10:46:17.552374    4163 fix.go:112] recreateIfNeeded on stopped-upgrade-385000: state=Stopped err=<nil>
	W0916 10:46:17.552382    4163 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:46:17.560784    4163 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-385000" ...
	I0916 10:46:17.564749    4163 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:46:17.564823    4163 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50487-:22,hostfwd=tcp::50488-:2376,hostname=stopped-upgrade-385000 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/disk.qcow2
	I0916 10:46:17.614309    4163 main.go:141] libmachine: STDOUT: 
	I0916 10:46:17.614335    4163 main.go:141] libmachine: STDERR: 
	I0916 10:46:17.614341    4163 main.go:141] libmachine: Waiting for VM to start (ssh -p 50487 docker@127.0.0.1)...
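
The qemu-system-aarch64 invocation above uses user-mode networking with hostfwd rules that map the guest's SSH (22) and Docker (2376) ports onto localhost ports 50487 and 50488, and the "Waiting for VM to start" line then polls the forwarded SSH port until the guest is up. A rough sketch of that wait loop, with the port taken from this run and the retry policy assumed (minikube's real wait also performs an SSH handshake, not just a TCP dial):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a host-forwarded guest port until something accepts TCP
// connections or the deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second) // guest is still booting; try again
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForSSH("127.0.0.1:50487", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
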
	I0916 10:46:37.691375    4163 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/config.json ...
	I0916 10:46:37.691740    4163 machine.go:93] provisionDockerMachine start ...
	I0916 10:46:37.691830    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:37.692035    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:37.692046    4163 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:46:37.768242    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 10:46:37.768262    4163 buildroot.go:166] provisioning hostname "stopped-upgrade-385000"
	I0916 10:46:37.768364    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:37.768551    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:37.768564    4163 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-385000 && echo "stopped-upgrade-385000" | sudo tee /etc/hostname
	I0916 10:46:37.845784    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-385000
	
	I0916 10:46:37.845854    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:37.845983    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:37.845994    4163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-385000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-385000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-385000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:46:37.915269    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:46:37.915282    4163 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19649-964/.minikube CaCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19649-964/.minikube}
	I0916 10:46:37.915291    4163 buildroot.go:174] setting up certificates
	I0916 10:46:37.915300    4163 provision.go:84] configureAuth start
	I0916 10:46:37.915306    4163 provision.go:143] copyHostCerts
	I0916 10:46:37.915380    4163 exec_runner.go:144] found /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem, removing ...
	I0916 10:46:37.915387    4163 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem
	I0916 10:46:37.915492    4163 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/ca.pem (1082 bytes)
	I0916 10:46:37.915686    4163 exec_runner.go:144] found /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem, removing ...
	I0916 10:46:37.915690    4163 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem
	I0916 10:46:37.915732    4163 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/cert.pem (1123 bytes)
	I0916 10:46:37.915837    4163 exec_runner.go:144] found /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem, removing ...
	I0916 10:46:37.915840    4163 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem
	I0916 10:46:37.915887    4163 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19649-964/.minikube/key.pem (1679 bytes)
	I0916 10:46:37.916002    4163 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-385000 san=[127.0.0.1 localhost minikube stopped-upgrade-385000]
	I0916 10:46:38.056167    4163 provision.go:177] copyRemoteCerts
	I0916 10:46:38.056216    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:46:38.056226    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:46:38.093208    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 10:46:38.099983    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:46:38.106598    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 10:46:38.114089    4163 provision.go:87] duration metric: took 198.787667ms to configureAuth
	I0916 10:46:38.114102    4163 buildroot.go:189] setting minikube options for container-runtime
	I0916 10:46:38.114212    4163 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:46:38.114252    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:38.114337    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:38.114344    4163 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 10:46:38.184111    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 10:46:38.184125    4163 buildroot.go:70] root file system type: tmpfs
	I0916 10:46:38.184175    4163 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 10:46:38.184224    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:38.184332    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:38.184373    4163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 10:46:38.259051    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 10:46:38.259124    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:38.259245    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:38.259254    4163 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 10:46:38.622653    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 10:46:38.622668    4163 machine.go:96] duration metric: took 930.955333ms to provisionDockerMachine
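
The SSH one-liner a few lines up installs the docker unit idempotently: `diff -u` succeeds when the rendered unit matches what is already on disk (so the `||` branch is skipped), and only on a difference is the new file moved into place and the daemon reloaded, enabled, and restarted. Here the diff fails because no unit exists yet, so the file is installed and the symlink is created. The same compare-then-swap shape, sketched locally in Go with illustrative paths:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged moves newPath over path and restarts the unit only when
// the contents differ, mirroring the `diff -u ... || { mv ...; systemctl ... }`
// one-liner from the log. A missing destination counts as "changed".
func installIfChanged(path, newPath, unit string) error {
	fresh, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	old, oldErr := os.ReadFile(path)
	if oldErr == nil && bytes.Equal(old, fresh) {
		return nil // unit unchanged; skip the restart entirely
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	_ = installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
}
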
	I0916 10:46:38.622674    4163 start.go:293] postStartSetup for "stopped-upgrade-385000" (driver="qemu2")
	I0916 10:46:38.622681    4163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:46:38.622757    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:46:38.622769    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:46:38.658258    4163 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:46:38.659627    4163 info.go:137] Remote host: Buildroot 2021.02.12
	I0916 10:46:38.659634    4163 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19649-964/.minikube/addons for local assets ...
	I0916 10:46:38.659704    4163 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19649-964/.minikube/files for local assets ...
	I0916 10:46:38.659817    4163 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem -> 14512.pem in /etc/ssl/certs
	I0916 10:46:38.659917    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:46:38.662443    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem --> /etc/ssl/certs/14512.pem (1708 bytes)
	I0916 10:46:38.670180    4163 start.go:296] duration metric: took 47.50175ms for postStartSetup
	I0916 10:46:38.670195    4163 fix.go:56] duration metric: took 21.118897459s for fixHost
	I0916 10:46:38.670234    4163 main.go:141] libmachine: Using SSH client type: native
	I0916 10:46:38.670338    4163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102db1190] 0x102db39d0 <nil>  [] 0s} localhost 50487 <nil> <nil>}
	I0916 10:46:38.670344    4163 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 10:46:38.735096    4163 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726508798.356194129
	
	I0916 10:46:38.735104    4163 fix.go:216] guest clock: 1726508798.356194129
	I0916 10:46:38.735108    4163 fix.go:229] Guest: 2024-09-16 10:46:38.356194129 -0700 PDT Remote: 2024-09-16 10:46:38.670197 -0700 PDT m=+21.238914418 (delta=-314.002871ms)
	I0916 10:46:38.735124    4163 fix.go:200] guest clock delta is within tolerance: -314.002871ms
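
The fix.go lines above compute guest/host clock skew: the guest reports `date +%s.%N` (1726508798.356194129), the host timestamps the reply, and the difference (-314ms here) is checked against a drift tolerance before minikube would force-set the guest clock. A small sketch of that arithmetic; the tolerance value below is an assumption, not the threshold fix.go actually uses:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns `date +%s.%N` output into a time.Time. It assumes the
// fractional field is 9 digits of nanoseconds, as %N prints.
func parseEpoch(s string) (time.Time, error) {
	sec, nsec := s, "0"
	if i := strings.IndexByte(s, '.'); i >= 0 {
		sec, nsec = s[:i], s[i+1:]
	}
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	ns, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, ns), nil
}

func main() {
	guest, err := parseEpoch("1726508798.356194129") // value from this run
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	const tolerance = time.Second // assumed; the real threshold lives in fix.go
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v outside tolerance, would reset guest clock\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}
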
	I0916 10:46:38.735126    4163 start.go:83] releasing machines lock for "stopped-upgrade-385000", held for 21.183839209s
	I0916 10:46:38.735198    4163 ssh_runner.go:195] Run: cat /version.json
	I0916 10:46:38.735211    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:46:38.735382    4163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:46:38.735401    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	W0916 10:46:38.735828    4163 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50487: connect: connection refused
	I0916 10:46:38.735848    4163 retry.go:31] will retry after 267.174656ms: dial tcp [::1]:50487: connect: connection refused
	W0916 10:46:38.768150    4163 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0916 10:46:38.768200    4163 ssh_runner.go:195] Run: systemctl --version
	I0916 10:46:38.769879    4163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 10:46:38.771542    4163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 10:46:38.771574    4163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0916 10:46:38.774285    4163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0916 10:46:38.779147    4163 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
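
The two `find ... -exec sed` commands above normalize any bridge/podman CNI configs found on the guest: IPv6 `dst`/`subnet` entries are dropped and every IPv4 subnet (plus the podman gateway) is rewritten to minikube's pod CIDR, 10.244.0.0/16. The same rewrite over a single config blob, approximated with Go regexps (this sketch skips the IPv6 stripping the real sed expressions also do):

package main

import (
	"fmt"
	"regexp"
)

var (
	subnetRe  = regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	gatewayRe = regexp.MustCompile(`"gateway":\s*"[^"]*"`)
)

// rewriteCNI pins subnet/gateway in a CNI config to the pod CIDR.
func rewriteCNI(conf string) string {
	conf = subnetRe.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
	return gatewayRe.ReplaceAllString(conf, `"gateway": "10.244.0.1"`)
}

func main() {
	in := `{"ranges": [[{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}]]}`
	fmt.Println(rewriteCNI(in)) // both fields now point at the pod network
}
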
	I0916 10:46:38.779178    4163 start.go:495] detecting cgroup driver to use...
	I0916 10:46:38.779256    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:46:38.786444    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0916 10:46:38.790046    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:46:38.793417    4163 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:46:38.793449    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:46:38.796405    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:46:38.799249    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:46:38.802501    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:46:38.805828    4163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:46:38.808892    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:46:38.811661    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:46:38.814586    4163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:46:38.817900    4163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:46:38.820687    4163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:46:38.823210    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:38.903066    4163 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:46:38.909003    4163 start.go:495] detecting cgroup driver to use...
	I0916 10:46:38.909060    4163 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 10:46:38.915442    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:46:38.921091    4163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 10:46:38.927254    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 10:46:38.932373    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:46:38.936787    4163 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:46:38.985941    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:46:38.991040    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:46:38.997883    4163 ssh_runner.go:195] Run: which cri-dockerd
	I0916 10:46:38.999275    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 10:46:39.002249    4163 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0916 10:46:39.007320    4163 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 10:46:39.091226    4163 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 10:46:39.168285    4163 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 10:46:39.168360    4163 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 10:46:39.173721    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:39.254281    4163 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:46:40.408997    4163 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15474225s)
	I0916 10:46:40.409068    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 10:46:40.413821    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:46:40.417940    4163 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 10:46:40.497679    4163 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 10:46:40.577714    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:40.658194    4163 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 10:46:40.664199    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 10:46:40.668313    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:40.746541    4163 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 10:46:40.783820    4163 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 10:46:40.783915    4163 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 10:46:40.786440    4163 start.go:563] Will wait 60s for crictl version
	I0916 10:46:40.786493    4163 ssh_runner.go:195] Run: which crictl
	I0916 10:46:40.787969    4163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:46:40.802441    4163 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0916 10:46:40.802530    4163 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:46:40.818561    4163 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 10:46:40.837806    4163 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0916 10:46:40.837892    4163 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0916 10:46:40.839195    4163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
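
The bash one-liner above is an upsert on /etc/hosts: `grep -v` drops any existing `host.minikube.internal` line, the new `10.0.2.2` mapping is appended, and the result is copied back over /etc/hosts via a temp file. Equivalent logic sketched in Go:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line maps
// name to ip, preserving every other entry -- the same effect as the
// `{ grep -v ...; echo ...; } > /tmp/h.$$; cp` pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "10.0.2.2", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
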
	I0916 10:46:40.842812    4163 kubeadm.go:883] updating cluster {Name:stopped-upgrade-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0916 10:46:40.842867    4163 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0916 10:46:40.842919    4163 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:46:40.852864    4163 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 10:46:40.852878    4163 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 10:46:40.852932    4163 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:46:40.856050    4163 ssh_runner.go:195] Run: which lz4
	I0916 10:46:40.857408    4163 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 10:46:40.858847    4163 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 10:46:40.858858    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0916 10:46:41.756068    4163 docker.go:649] duration metric: took 898.733166ms to copy over tarball
	I0916 10:46:41.756140    4163 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 10:46:42.911591    4163 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.155478625s)
	I0916 10:46:42.911604    4163 ssh_runner.go:146] rm: /preloaded.tar.lz4
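
The preload flow above is stat-then-copy-then-extract: the existence check for /preloaded.tar.lz4 fails, so the ~360 MB cached tarball is scp'd into the guest, unpacked with `tar --xattrs -I lz4 -C /var -xf`, and then deleted. A local sketch of the copy-if-missing step plus the same extraction flags (paths illustrative; lz4 must be installed, and the real copy goes over SSH):

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// ensurePreload copies src to dst only when dst is missing, then extracts it
// under destDir with the same tar flags the log uses.
func ensurePreload(src, dst, destDir string) error {
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		if _, err := io.Copy(out, in); err != nil {
			out.Close()
			return err
		}
		out.Close()
	}
	// Matches: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", dst)
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := ensurePreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
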
	I0916 10:46:42.927194    4163 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 10:46:42.930574    4163 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0916 10:46:42.936082    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:43.013483    4163 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 10:46:44.472774    4163 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.459325042s)
	I0916 10:46:44.472879    4163 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 10:46:44.485784    4163 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 10:46:44.485792    4163 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0916 10:46:44.485797    4163 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 10:46:44.489697    4163 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:44.492456    4163 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.495123    4163 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:44.495948    4163 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.497863    4163 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.498026    4163 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.499702    4163 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.499850    4163 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.500851    4163 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:44.501135    4163 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.502111    4163 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0916 10:46:44.502111    4163 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.503171    4163 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:44.503289    4163 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:44.504455    4163 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0916 10:46:44.505075    4163 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:44.935446    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.939782    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.946407    4163 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0916 10:46:44.946435    4163 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.946508    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0916 10:46:44.953955    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.960387    4163 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0916 10:46:44.960405    4163 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.960457    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0916 10:46:44.963724    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.970101    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0916 10:46:44.971019    4163 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0916 10:46:44.971036    4163 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.971101    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0916 10:46:44.975381    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0916 10:46:44.976382    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0916 10:46:44.986592    4163 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0916 10:46:44.986615    4163 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.986675    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0916 10:46:44.987602    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0916 10:46:44.992475    4163 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0916 10:46:44.992492    4163 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0916 10:46:44.992537    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0916 10:46:45.002865    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0916 10:46:45.006440    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:45.011271    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0916 10:46:45.011394    4163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0916 10:46:45.016820    4163 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0916 10:46:45.016842    4163 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:45.016827    4163 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0916 10:46:45.016865    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0916 10:46:45.016896    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0916 10:46:45.023828    4163 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0916 10:46:45.023838    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0916 10:46:45.038751    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0916 10:46:45.052973    4163 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
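
The cache_images sequence repeated above for each image is: `docker image inspect` to check whether the image already exists in the runtime at the expected hash; if not, `docker rmi` any stale copy, transfer the cached tarball, and stream it through `docker load`. A condensed sketch of one iteration; the image ID below is the pause:3.7 hash from this run (docker reports it with a sha256: prefix that the log omits), and the tarball path is taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// ensureImage loads tarball into the Docker daemon unless image is already
// present at wantID. Mirrors the inspect -> rmi -> load steps from the log.
func ensureImage(image, wantID, tarball string) error {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the right hash
	}
	if err == nil {
		// Present but wrong hash: remove before reloading, as docker.go does.
		exec.Command("docker", "rmi", image).Run()
	}
	f, ferr := os.Open(tarball)
	if ferr != nil {
		return ferr
	}
	defer f.Close()
	load := exec.Command("docker", "load")
	load.Stdin = f // equivalent of `cat tarball | docker load`
	load.Stdout, load.Stderr = os.Stdout, os.Stderr
	return load.Run()
}

func main() {
	err := ensureImage("registry.k8s.io/pause:3.7",
		"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
		"/var/lib/minikube/images/pause_3.7")
	if err != nil {
		fmt.Println(err)
	}
}
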
	W0916 10:46:45.055501    4163 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0916 10:46:45.055644    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:45.065140    4163 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0916 10:46:45.065165    4163 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:45.065224    4163 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0916 10:46:45.074996    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0916 10:46:45.075123    4163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0916 10:46:45.076685    4163 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0916 10:46:45.076695    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0916 10:46:45.114968    4163 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0916 10:46:45.114980    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0916 10:46:45.151705    4163 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0916 10:46:45.340243    4163 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0916 10:46:45.340554    4163 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:45.358840    4163 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0916 10:46:45.358871    4163 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:45.358966    4163 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:46:45.376429    4163 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 10:46:45.376587    4163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 10:46:45.378175    4163 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 10:46:45.378189    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0916 10:46:45.410812    4163 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 10:46:45.410826    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0916 10:46:45.642283    4163 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 10:46:45.642324    4163 cache_images.go:92] duration metric: took 1.156561458s to LoadCachedImages
	W0916 10:46:45.642370    4163 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0916 10:46:45.642376    4163 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0916 10:46:45.642425    4163 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-385000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:46:45.642507    4163 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 10:46:45.657216    4163 cni.go:84] Creating CNI manager for ""
	I0916 10:46:45.657228    4163 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:46:45.657233    4163 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:46:45.657242    4163 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-385000 NodeName:stopped-upgrade-385000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:46:45.657319    4163 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-385000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:46:45.657378    4163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0916 10:46:45.660472    4163 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:46:45.660505    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:46:45.662966    4163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0916 10:46:45.667962    4163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:46:45.672649    4163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0916 10:46:45.678346    4163 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0916 10:46:45.679646    4163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:46:45.683051    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:46:45.752711    4163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:46:45.758701    4163 certs.go:68] Setting up /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000 for IP: 10.0.2.15
	I0916 10:46:45.758712    4163 certs.go:194] generating shared ca certs ...
	I0916 10:46:45.758721    4163 certs.go:226] acquiring lock for ca certs: {Name:mk95bad6e61a22ab8ae5ec5f8cd43ca9ad7a3f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:46:45.758874    4163 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key
	I0916 10:46:45.758911    4163 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key
	I0916 10:46:45.758920    4163 certs.go:256] generating profile certs ...
	I0916 10:46:45.758978    4163 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.key
	I0916 10:46:45.758993    4163 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key.a125086a
	I0916 10:46:45.759002    4163 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt.a125086a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0916 10:46:45.796891    4163 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt.a125086a ...
	I0916 10:46:45.796905    4163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt.a125086a: {Name:mk7cf1853e70135d80fe55d14110a29e8f3c472c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:46:45.797703    4163 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key.a125086a ...
	I0916 10:46:45.797708    4163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key.a125086a: {Name:mkfcacca423acee63f1eaba2b7a073b3c1e7f477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:46:45.797870    4163 certs.go:381] copying /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt.a125086a -> /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt
	I0916 10:46:45.798032    4163 certs.go:385] copying /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key.a125086a -> /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key
	I0916 10:46:45.798171    4163 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/proxy-client.key
	I0916 10:46:45.798304    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/1451.pem (1338 bytes)
	W0916 10:46:45.798334    4163 certs.go:480] ignoring /Users/jenkins/minikube-integration/19649-964/.minikube/certs/1451_empty.pem, impossibly tiny 0 bytes
	I0916 10:46:45.798339    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 10:46:45.798361    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem (1082 bytes)
	I0916 10:46:45.798380    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:46:45.798397    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/certs/key.pem (1679 bytes)
	I0916 10:46:45.798661    4163 certs.go:484] found cert: /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem (1708 bytes)
	I0916 10:46:45.798990    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:46:45.807636    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:46:45.814563    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:46:45.822073    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 10:46:45.829320    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 10:46:45.836392    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:46:45.842960    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:46:45.849902    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:46:45.857361    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/certs/1451.pem --> /usr/share/ca-certificates/1451.pem (1338 bytes)
	I0916 10:46:45.863802    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/ssl/certs/14512.pem --> /usr/share/ca-certificates/14512.pem (1708 bytes)
	I0916 10:46:45.870469    4163 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:46:45.877431    4163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:46:45.882827    4163 ssh_runner.go:195] Run: openssl version
	I0916 10:46:45.884720    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:46:45.887424    4163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:45.888825    4163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:45.888845    4163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:46:45.890483    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:46:45.893793    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1451.pem && ln -fs /usr/share/ca-certificates/1451.pem /etc/ssl/certs/1451.pem"
	I0916 10:46:45.897008    4163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1451.pem
	I0916 10:46:45.898470    4163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 17:19 /usr/share/ca-certificates/1451.pem
	I0916 10:46:45.898490    4163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1451.pem
	I0916 10:46:45.900174    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1451.pem /etc/ssl/certs/51391683.0"
	I0916 10:46:45.902833    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14512.pem && ln -fs /usr/share/ca-certificates/14512.pem /etc/ssl/certs/14512.pem"
	I0916 10:46:45.906325    4163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14512.pem
	I0916 10:46:45.907719    4163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 17:19 /usr/share/ca-certificates/14512.pem
	I0916 10:46:45.907740    4163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14512.pem
	I0916 10:46:45.909294    4163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14512.pem /etc/ssl/certs/3ec20f2e.0"
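
The three ln -fs commands above wire each CA into OpenSSL's default verify directory: openssl x509 -hash -noout prints the subject-name hash (b5213941 for minikubeCA.pem, 51391683 for 1451.pem, 3ec20f2e for 14512.pem), and /etc/ssl/certs/<hash>.0 is the symlink name OpenSSL consults when validating a chain. Below is a minimal Go sketch of the same hash-then-link step; the paths are the ones from the log, and the command strings are illustrative rather than minikube's actual helpers.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash returns the hash that names the /etc/ssl/certs/<hash>.0
	// symlink, exactly what "openssl x509 -hash -noout -in <pem>" prints.
	func subjectHash(pem string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Example certificate from the log; any PEM certificate works here.
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		hash, err := subjectHash(pem)
		if err != nil {
			panic(err)
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Same guard as in the log: only create the link if it is missing.
		script := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
		if err := exec.Command("sudo", "/bin/bash", "-c", script).Run(); err != nil {
			panic(err)
		}
	}
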
	I0916 10:46:45.912741    4163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:46:45.914085    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:46:45.915980    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:46:45.917649    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:46:45.919572    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:46:45.921221    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:46:45.922971    4163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
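
Each of the six openssl checks above uses -checkend 86400, which succeeds only if the certificate will still be valid 86400 seconds (24 hours) from now; a failure here is what forces certificate regeneration. The same test can be expressed directly against Go's crypto/x509, as in this sketch, with the file path taken from the log purely as an example:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// -checkend 86400: the cert must outlive the next 24 hours.
		deadline := time.Now().Add(24 * time.Hour)
		if cert.NotAfter.Before(deadline) {
			fmt.Println("certificate expires within 24h; regeneration needed")
		} else {
			fmt.Println("certificate valid beyond the 24h window")
		}
	}
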
	I0916 10:46:45.924774    4163 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0916 10:46:45.924849    4163 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:46:45.935303    4163 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:46:45.938569    4163 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:46:45.938574    4163 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:46:45.938599    4163 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:46:45.941539    4163 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:46:45.941832    4163 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-385000" does not appear in /Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:46:45.941941    4163 kubeconfig.go:62] /Users/jenkins/minikube-integration/19649-964/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-385000" cluster setting kubeconfig missing "stopped-upgrade-385000" context setting]
	I0916 10:46:45.942128    4163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/kubeconfig: {Name:mk3766c19461825f7de68cf1dc4ddceadf57e288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:46:45.942595    4163 kapi.go:59] client config for stopped-upgrade-385000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.key", CAFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104389800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:46:45.942912    4163 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:46:45.946038    4163 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-385000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
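
Two substantive changes show up in the drift diff: the CRI socket must now carry the unix:// scheme, and the kubelet cgroup driver flips from systemd to cgroupfs (plus the added hairpin and runtime-timeout settings). The drift check itself is just diff -u: exit status 0 means the configs match, non-zero means the cluster must be reconfigured from the .new file. A sketch of that decision, with paths mirroring the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Println("no kubeadm config drift")
			return
		}
		// diff exits 1 when the files differ; the unified diff is on stdout.
		fmt.Printf("detected kubeadm config drift, will reconfigure:\n%s", out)
		// Adopt the new config, matching the later "sudo cp ... kubeadm.yaml" step.
		if err := exec.Command("sudo", "cp",
			"/var/tmp/minikube/kubeadm.yaml.new", "/var/tmp/minikube/kubeadm.yaml").Run(); err != nil {
			panic(err)
		}
	}
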
	I0916 10:46:45.946046    4163 kubeadm.go:1160] stopping kube-system containers ...
	I0916 10:46:45.946095    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 10:46:45.956823    4163 docker.go:483] Stopping containers: [8d9f55826a97 8d4d0ab15021 0b4e9b314038 bc2f80890fd2 260c90f3d5ef 24a3271025cd 7c61046fb44a bd11f23a2766]
	I0916 10:46:45.956900    4163 ssh_runner.go:195] Run: docker stop 8d9f55826a97 8d4d0ab15021 0b4e9b314038 bc2f80890fd2 260c90f3d5ef 24a3271025cd 7c61046fb44a bd11f23a2766
	I0916 10:46:45.969747    4163 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 10:46:45.975984    4163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:46:45.978756    4163 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:46:45.978761    4163 kubeadm.go:157] found existing configuration files:
	
	I0916 10:46:45.978787    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf
	I0916 10:46:45.981862    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:46:45.981888    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:46:45.984733    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf
	I0916 10:46:45.987066    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:46:45.987091    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:46:45.989955    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf
	I0916 10:46:45.993090    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:46:45.993115    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:46:45.995723    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf
	I0916 10:46:45.998355    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:46:45.998383    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
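
The four grep-then-rm pairs above are one sweep over the kubeconfig files kubeadm manages: if a file does not mention the current control-plane endpoint (here every grep fails with status 2 because none of the files exist at all), it is removed so the kubeconfig phase below can regenerate it. A compact sketch of the sweep, assuming the endpoint and file list from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:50522"
		for _, conf := range []string{"admin.conf", "kubelet.conf",
			"controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + conf
			// grep exits non-zero if the endpoint is absent or the file is missing.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				fmt.Printf("%s lacks %s (or is missing); removing\n", path, endpoint)
				exec.Command("sudo", "rm", "-f", path).Run()
			}
		}
	}
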
	I0916 10:46:46.001360    4163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:46:46.004149    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:46:46.027615    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:46:46.317453    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:46:46.453742    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:46:46.480738    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
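
Instead of a full kubeadm init, the restart path replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, local etcd) against the repaired kubeadm.yaml, with PATH pinned so the kubeadm binary matching the v1.24.1 cluster is used. Roughly, as a sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start",
			"control-plane all", "etcd local"}
		for _, phase := range phases {
			// Pin PATH so the version-matched kubeadm binary is found first.
			script := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
				phase)
			if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
				panic(fmt.Sprintf("phase %q failed: %v\n%s", phase, err, out))
			}
		}
	}
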
	I0916 10:46:46.499848    4163 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:46:46.499923    4163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:47.000909    4163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:47.501939    4163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:47.506170    4163 api_server.go:72] duration metric: took 1.006358125s to wait for apiserver process to appear ...
	I0916 10:46:47.506182    4163 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:46:47.506192    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:52.508076    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:52.508098    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:46:57.508217    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:46:57.508262    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:02.508566    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:02.508595    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:07.509157    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:07.509211    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:12.509949    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:12.510042    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:17.511130    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:17.511187    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:22.512606    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:22.512639    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:27.514403    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:27.514442    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:32.516530    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:32.516558    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:37.518605    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:37.518634    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:42.520745    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:42.520763    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:47.522797    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
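
From 10:46:47 to 10:47:47 every healthz probe times out after roughly five seconds, so the apiserver never answered once in the minute-long window. A hedged sketch of such a probe loop follows; the endpoint and the five-second budget come from the log, while the TLS handling (skipping verification, as a bootstrap probe might before the cluster CA is trusted) is an assumption rather than minikube's actual client setup.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap between probes in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption, see above
			},
		}
		for attempt := 1; attempt <= 12; attempt++ {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Printf("attempt %d: apiserver not healthy yet: %v\n", attempt, err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz ok")
				return
			}
		}
		fmt.Println("apiserver never became healthy; gathering component logs instead")
	}
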
	I0916 10:47:47.523034    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:47:47.545393    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:47:47.545483    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:47:47.555907    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:47:47.555990    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:47:47.565978    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:47:47.566061    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:47:47.576820    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:47:47.576900    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:47:47.587811    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:47:47.587891    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:47:47.598483    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:47:47.598563    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:47:47.608416    4163 logs.go:276] 0 containers: []
	W0916 10:47:47.608432    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:47:47.608513    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:47:47.619128    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:47:47.619145    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:47:47.619151    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:47:47.631203    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:47:47.631213    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:47:47.647166    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:47:47.647177    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:47:47.672444    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:47:47.672451    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:47:47.712290    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:47:47.712299    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:47:47.791867    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:47:47.791878    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:47:47.806228    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:47:47.806240    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:47:47.820783    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:47:47.820794    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:47:47.863306    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:47:47.863325    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:47:47.889155    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:47:47.889168    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:47:47.904300    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:47:47.904318    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:47:47.916285    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:47:47.916297    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:47:47.920579    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:47:47.920586    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:47:47.931888    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:47:47.931901    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:47:47.946332    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:47:47.946346    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:47:47.958556    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:47:47.958568    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:47:47.975887    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:47:47.975898    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
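
Each of these gathering rounds follows the same recipe: for every control-plane component, list candidate container IDs with a docker ps name filter, then tail the last 400 lines of each match (the kindnet filter finds nothing here). The cycle repeats essentially verbatim between each healthz attempt below. A sketch of one round, with the component list taken from the log and the helper name purely illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all containers (running or exited) whose name
	// matches the k8s_<component> prefix, one ID per line.
	func containerIDs(component string) []string {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
			ids := containerIDs(c)
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", c, id, out)
			}
		}
	}
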
	I0916 10:47:50.492976    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:47:55.495080    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:47:55.495286    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:47:55.514924    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:47:55.515062    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:47:55.529123    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:47:55.529219    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:47:55.541968    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:47:55.542061    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:47:55.552823    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:47:55.552906    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:47:55.564420    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:47:55.564504    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:47:55.575316    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:47:55.575387    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:47:55.585519    4163 logs.go:276] 0 containers: []
	W0916 10:47:55.585533    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:47:55.585608    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:47:55.596020    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:47:55.596039    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:47:55.596044    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:47:55.635457    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:47:55.635467    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:47:55.654499    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:47:55.654509    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:47:55.666987    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:47:55.666997    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:47:55.679239    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:47:55.679248    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:47:55.690890    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:47:55.690901    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:47:55.727510    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:47:55.727519    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:47:55.731734    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:47:55.731747    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:47:55.745223    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:47:55.745231    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:47:55.759484    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:47:55.759495    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:47:55.771431    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:47:55.771441    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:47:55.785899    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:47:55.785909    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:47:55.819910    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:47:55.819919    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:47:55.833659    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:47:55.833669    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:47:55.845401    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:47:55.845411    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:47:55.857804    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:47:55.857816    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:47:55.875697    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:47:55.875708    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:47:58.401909    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:03.403619    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:03.404140    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:03.436538    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:03.436691    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:03.459421    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:03.459524    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:03.472172    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:03.472264    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:03.492553    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:03.492658    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:03.510013    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:03.510102    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:03.520135    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:03.520217    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:03.530563    4163 logs.go:276] 0 containers: []
	W0916 10:48:03.530573    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:03.530641    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:03.543234    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:03.543253    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:03.543259    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:03.555989    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:03.556000    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:03.573327    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:03.573337    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:03.585216    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:03.585227    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:03.589938    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:03.589947    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:03.630526    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:03.630539    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:03.645213    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:03.645224    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:03.656811    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:03.656823    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:03.671469    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:03.671481    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:03.687933    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:03.687943    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:03.699734    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:03.699745    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:03.712066    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:03.712076    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:03.749279    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:03.749289    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:03.784628    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:03.784643    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:03.800271    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:03.800282    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:03.812002    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:03.812015    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:03.826460    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:03.826469    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:06.352134    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:11.354222    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:11.354376    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:11.368426    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:11.368513    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:11.379636    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:11.379710    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:11.390030    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:11.390105    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:11.400840    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:11.400919    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:11.411618    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:11.411709    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:11.422013    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:11.422098    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:11.432163    4163 logs.go:276] 0 containers: []
	W0916 10:48:11.432173    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:11.432241    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:11.445987    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:11.446004    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:11.446010    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:11.482827    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:11.482835    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:11.504317    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:11.504328    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:11.524484    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:11.524494    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:11.546670    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:11.546680    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:11.562310    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:11.562319    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:11.573635    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:11.573649    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:11.585689    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:11.585703    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:11.590204    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:11.590210    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:11.632480    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:11.632494    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:11.649163    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:11.649174    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:11.688099    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:11.688115    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:11.704053    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:11.704062    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:11.715899    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:11.715910    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:11.729547    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:11.729561    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:11.746797    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:11.746814    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:11.759351    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:11.759365    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:14.284442    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:19.284790    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:19.284960    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:19.296646    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:19.296747    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:19.308745    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:19.308829    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:19.319345    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:19.319430    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:19.329960    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:19.330051    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:19.340417    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:19.340495    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:19.351156    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:19.351240    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:19.361568    4163 logs.go:276] 0 containers: []
	W0916 10:48:19.361592    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:19.361658    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:19.372401    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:19.372419    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:19.372425    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:19.383695    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:19.383706    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:19.398520    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:19.398530    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:19.410441    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:19.410453    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:19.414731    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:19.414738    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:19.429161    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:19.429173    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:19.440863    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:19.440877    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:19.464067    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:19.464073    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:19.501421    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:19.501432    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:19.514926    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:19.514940    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:19.529354    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:19.529364    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:19.547988    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:19.547998    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:19.583474    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:19.583490    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:19.621990    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:19.622005    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:19.633325    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:19.633336    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:19.651332    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:19.651347    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:19.666783    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:19.666792    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:22.179875    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:27.182102    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:27.182392    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:27.207042    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:27.207184    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:27.229271    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:27.229371    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:27.241940    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:27.242026    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:27.252979    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:27.253067    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:27.264478    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:27.264560    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:27.275380    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:27.275470    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:27.285861    4163 logs.go:276] 0 containers: []
	W0916 10:48:27.285872    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:27.285947    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:27.296372    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:27.296392    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:27.296398    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:27.335695    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:27.335704    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:27.346907    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:27.346918    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:27.358258    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:27.358274    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:27.372430    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:27.372440    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:27.383707    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:27.383719    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:27.388343    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:27.388352    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:27.422497    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:27.422508    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:27.436763    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:27.436775    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:27.453551    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:27.453561    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:27.465003    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:27.465013    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:27.479165    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:27.479174    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:27.516517    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:27.516530    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:27.530780    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:27.530791    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:27.556547    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:27.556557    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:27.568169    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:27.568181    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:27.587148    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:27.587163    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:30.098779    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:35.099858    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:35.100463    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:35.140199    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:35.140371    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:35.162441    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:35.162562    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:35.177342    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:35.177441    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:35.190498    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:35.190593    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:35.201251    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:35.201334    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:35.217247    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:35.217327    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:35.227030    4163 logs.go:276] 0 containers: []
	W0916 10:48:35.227048    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:35.227114    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:35.237661    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:35.237679    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:35.237685    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:35.241886    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:35.241892    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:35.253699    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:35.253713    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:35.268463    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:35.268473    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:35.280955    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:35.280971    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:35.320002    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:35.320018    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:35.334973    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:35.334984    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:35.352734    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:35.352744    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:35.375998    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:35.376007    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:35.387288    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:35.387300    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:35.421949    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:35.421962    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:35.436214    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:35.436228    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:35.453698    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:35.453711    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:35.465472    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:35.465483    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:35.489455    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:35.489465    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:35.527700    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:35.527713    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:35.541663    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:35.541680    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
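	[editor's note] Each failed healthz probe in this log triggers the same diagnostic pass, seen in full for the first time in the cycle above. Collapsed into a standalone script, the pass amounts to the sketch below; the individual commands are copied verbatim from the log lines, but the loop wrapper and ordering are illustrative, not minikube's actual implementation.

	    #!/bin/bash
	    # Sketch of minikube's per-cycle log gathering (assembled from the
	    # "Run:" lines above; loop structure is illustrative only).
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      # discover all containers (running or exited) for this component
	      for id in $(docker ps -a --filter=name=k8s_$c --format='{{.ID}}'); do
	        docker logs --tail 400 "$id"     # tail each container's logs
	      done
	    done
	    sudo journalctl -u kubelet -n 400    # kubelet unit logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig

	Note that the kindnet lookup in each pass returns zero containers (the warning at logs.go:278), which is expected here since the cluster is not using the kindnet CNI.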
	I0916 10:48:38.055753    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:43.058231    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:43.058826    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:43.095324    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:43.095499    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:43.117874    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:43.118004    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:43.133215    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:43.133315    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:43.145813    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:43.145910    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:43.156293    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:43.156373    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:43.168990    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:43.169075    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:43.179451    4163 logs.go:276] 0 containers: []
	W0916 10:48:43.179462    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:43.179538    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:43.190038    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:43.190056    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:43.190061    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:43.202763    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:43.202773    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:43.242226    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:43.242236    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:43.246649    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:43.246657    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:43.261465    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:43.261475    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:43.273030    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:43.273043    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:43.289409    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:43.289423    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:43.300458    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:43.300469    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:43.312739    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:43.312750    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:43.349491    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:43.349507    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:43.387689    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:43.387699    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:43.409025    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:43.409038    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:43.420713    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:43.420726    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:43.445537    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:43.445545    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:43.459457    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:43.459467    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:43.474600    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:43.474611    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:43.493087    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:43.493101    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:46.012170    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:51.014167    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:51.014322    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:51.030212    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:51.030303    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:51.042472    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:51.042592    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:51.053233    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:51.053337    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:51.064083    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:51.064169    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:51.074825    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:51.074902    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:51.087484    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:51.087573    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:51.098024    4163 logs.go:276] 0 containers: []
	W0916 10:48:51.098038    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:51.098115    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:51.108459    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:51.108476    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:51.108484    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:51.147286    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:51.147296    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:51.166841    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:51.166854    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:51.181056    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:51.181067    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:51.196661    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:51.196675    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:51.200860    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:51.200870    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:51.215026    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:51.215040    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:51.226925    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:51.226935    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:51.263178    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:51.263188    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:51.287069    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:51.287080    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:51.324883    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:51.324897    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:51.335988    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:51.336002    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:51.347722    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:51.347733    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:48:51.363249    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:51.363260    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:51.386381    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:51.386395    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:51.398198    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:51.398210    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:51.409352    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:51.409364    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:53.921153    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:48:58.923233    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:48:58.923533    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:48:58.946274    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:48:58.946414    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:48:58.962065    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:48:58.962171    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:48:58.975224    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:48:58.975320    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:48:58.987182    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:48:58.987268    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:48:58.997815    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:48:58.997892    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:48:59.008166    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:48:59.008255    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:48:59.017934    4163 logs.go:276] 0 containers: []
	W0916 10:48:59.017945    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:48:59.018017    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:48:59.028106    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:48:59.028123    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:48:59.028129    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:48:59.041876    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:48:59.041886    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:48:59.053827    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:48:59.053838    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:48:59.073397    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:48:59.073407    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:48:59.097815    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:48:59.097822    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:48:59.109737    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:48:59.109747    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:48:59.147779    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:48:59.147791    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:48:59.161583    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:48:59.161594    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:48:59.172842    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:48:59.172853    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:48:59.184913    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:48:59.184928    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:48:59.196295    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:48:59.196306    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:48:59.210316    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:48:59.210326    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:48:59.221289    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:48:59.221299    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:48:59.258924    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:48:59.258933    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:48:59.263320    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:48:59.263330    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:48:59.300095    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:48:59.300111    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:48:59.321261    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:48:59.321272    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:01.839065    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:06.841103    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:06.841243    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:06.852664    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:06.852746    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:06.862999    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:06.863073    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:06.873259    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:06.873343    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:06.883787    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:06.883864    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:06.894154    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:06.894237    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:06.906168    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:06.906240    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:06.915864    4163 logs.go:276] 0 containers: []
	W0916 10:49:06.915878    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:06.915951    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:06.926302    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:06.926318    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:06.926333    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:06.930751    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:06.930758    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:06.944485    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:06.944495    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:06.956386    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:06.956396    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:06.967657    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:06.967668    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:06.982503    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:06.982517    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:06.994438    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:06.994449    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:07.006757    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:07.006767    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:07.050219    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:07.050231    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:07.068478    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:07.068487    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:07.079784    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:07.079794    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:07.098191    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:07.098202    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:07.135903    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:07.135916    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:07.147844    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:07.147855    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:07.185935    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:07.185943    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:07.203403    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:07.203415    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:07.218904    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:07.218917    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:09.745919    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:14.747040    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:14.747172    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:14.758252    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:14.758349    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:14.768706    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:14.768791    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:14.779156    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:14.779236    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:14.789610    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:14.789688    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:14.804663    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:14.804752    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:14.815375    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:14.815454    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:14.829628    4163 logs.go:276] 0 containers: []
	W0916 10:49:14.829642    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:14.829717    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:14.840611    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:14.840628    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:14.840635    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:14.875516    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:14.875526    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:14.888564    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:14.888575    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:14.900351    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:14.900360    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:14.915666    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:14.915677    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:14.933195    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:14.933204    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:14.945161    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:14.945172    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:14.983826    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:14.983833    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:14.988771    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:14.988778    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:15.000639    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:15.000653    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:15.015605    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:15.015619    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:15.026963    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:15.026976    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:15.063928    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:15.063938    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:15.079006    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:15.079020    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:15.096394    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:15.096406    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:15.120033    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:15.120046    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:15.134176    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:15.134190    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:17.650922    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:22.651261    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:22.651410    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:22.665608    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:22.665710    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:22.678090    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:22.678175    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:22.689017    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:22.689105    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:22.699851    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:22.699941    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:22.710030    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:22.710103    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:22.724914    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:22.724991    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:22.735108    4163 logs.go:276] 0 containers: []
	W0916 10:49:22.735118    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:22.735180    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:22.750291    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:22.750314    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:22.750319    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:22.761565    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:22.761577    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:22.765741    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:22.765748    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:22.777430    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:22.777441    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:22.795519    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:22.795528    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:22.809849    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:22.809858    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:22.823524    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:22.823533    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:22.838905    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:22.838915    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:22.854495    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:22.854507    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:22.877346    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:22.877355    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:22.888984    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:22.888995    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:22.923053    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:22.923064    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:22.961574    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:22.961585    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:22.977099    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:22.977109    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:22.991356    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:22.991369    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:23.030411    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:23.030424    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:23.043827    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:23.043837    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:25.557050    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:30.559030    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:30.559326    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:30.589461    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:30.589615    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:30.607352    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:30.607454    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:30.627401    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:30.627502    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:30.638711    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:30.638792    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:30.651240    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:30.651326    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:30.666218    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:30.666305    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:30.676849    4163 logs.go:276] 0 containers: []
	W0916 10:49:30.676859    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:30.676922    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:30.688363    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:30.688380    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:30.688385    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:30.725910    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:30.725919    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:30.737509    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:30.737520    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:30.751878    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:30.751888    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:30.769557    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:30.769568    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:30.780480    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:30.780493    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:30.793511    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:30.793522    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:30.808135    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:30.808145    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:30.832322    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:30.832335    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:30.836484    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:30.836490    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:30.874240    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:30.874252    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:30.885409    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:30.885421    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:30.922837    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:30.922848    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:30.937183    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:30.937194    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:30.948953    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:30.948964    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:30.972099    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:30.972110    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:30.991580    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:30.991590    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:33.507952    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:38.509348    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:38.509663    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:38.531243    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:38.531363    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:38.551301    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:38.551402    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:38.562773    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:38.562864    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:38.573438    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:38.573522    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:38.583823    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:38.583908    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:38.594466    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:38.594549    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:38.609214    4163 logs.go:276] 0 containers: []
	W0916 10:49:38.609229    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:38.609304    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:38.622995    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:38.623013    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:38.623019    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:38.634100    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:38.634111    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:38.677730    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:38.677743    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:38.724828    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:38.724843    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:38.738295    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:38.738309    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:38.753352    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:38.753362    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:38.767792    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:38.767801    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:38.806967    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:38.806976    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:38.811124    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:38.811131    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:38.822593    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:38.822602    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:38.835482    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:38.835493    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:38.847651    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:38.847662    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:38.858995    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:38.859007    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:38.883437    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:38.883444    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:38.897160    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:38.897169    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:38.913123    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:38.913134    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:38.931968    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:38.931982    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:41.451506    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:46.453741    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:46.453970    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:46.479508    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:46.479635    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:46.495457    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:46.495555    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:46.507433    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:46.507529    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:46.518607    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:46.518690    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:46.546205    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:46.546293    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:46.561704    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:46.561786    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:46.571655    4163 logs.go:276] 0 containers: []
	W0916 10:49:46.571669    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:46.571735    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:46.582299    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:46.582317    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:46.582323    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:46.619088    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:46.619098    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:46.632882    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:46.632893    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:46.647357    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:46.647367    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:46.662455    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:46.662465    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:46.675558    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:46.675570    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:46.679867    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:46.679873    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:46.719015    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:46.719054    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:46.737708    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:46.737719    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:46.772766    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:46.772783    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:46.791311    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:46.791325    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:46.808411    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:46.808425    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:46.822496    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:46.822511    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:46.833237    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:46.833249    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:46.856045    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:46.856053    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:46.867677    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:46.867690    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:46.879481    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:46.879491    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:49.393326    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:49:54.395693    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:49:54.396195    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:49:54.426335    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:49:54.426491    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:49:54.444733    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:49:54.444847    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:49:54.458588    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:49:54.458674    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:49:54.471682    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:49:54.471754    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:49:54.482769    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:49:54.482852    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:49:54.493113    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:49:54.493194    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:49:54.507187    4163 logs.go:276] 0 containers: []
	W0916 10:49:54.507199    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:49:54.507265    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:49:54.517192    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:49:54.517212    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:49:54.517217    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:49:54.555909    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:49:54.555922    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:49:54.561091    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:49:54.561098    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:49:54.584981    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:49:54.584991    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:49:54.599411    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:49:54.599422    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:49:54.623867    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:49:54.623876    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:49:54.639077    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:49:54.639091    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:49:54.681485    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:49:54.681497    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:49:54.693361    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:49:54.693374    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:49:54.705365    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:49:54.705378    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:49:54.740072    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:49:54.740092    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:49:54.763414    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:49:54.763428    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:49:54.787471    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:49:54.787483    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:49:54.799404    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:49:54.799417    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:49:54.811672    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:49:54.811684    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:49:54.826229    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:49:54.826239    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:49:54.838112    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:49:54.838122    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:49:57.357295    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:02.359526    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:02.359848    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:02.388091    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:02.388221    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:02.405551    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:02.405647    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:02.418879    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:02.418966    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:02.429613    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:02.429717    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:02.443886    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:02.443971    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:02.454709    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:02.454794    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:02.464967    4163 logs.go:276] 0 containers: []
	W0916 10:50:02.464980    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:02.465051    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:02.476276    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:02.476295    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:02.476301    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:02.500670    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:02.500679    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:02.512394    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:02.512408    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:02.554688    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:02.554702    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:02.568666    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:02.568680    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:02.583950    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:02.583963    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:02.595580    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:02.595590    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:02.611096    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:02.611105    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:02.626052    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:02.626066    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:02.663382    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:02.663395    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:02.702502    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:02.702514    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:02.714522    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:02.714535    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:02.728602    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:02.728616    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:02.746131    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:02.746142    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:02.758402    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:02.758412    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:02.762282    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:02.762288    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:02.782733    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:02.782745    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:05.305688    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:10.307697    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:10.307876    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:10.321711    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:10.321813    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:10.333379    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:10.333463    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:10.344032    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:10.344118    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:10.354526    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:10.354616    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:10.365080    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:10.365163    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:10.376535    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:10.376618    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:10.387107    4163 logs.go:276] 0 containers: []
	W0916 10:50:10.387120    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:10.387194    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:10.397746    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:10.397765    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:10.397771    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:10.402079    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:10.402085    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:10.415923    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:10.415933    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:10.427929    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:10.427940    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:10.443501    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:10.443515    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:10.483221    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:10.483231    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:10.494718    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:10.494729    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:10.513223    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:10.513237    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:10.534542    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:10.534555    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:10.559704    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:10.559714    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:10.573484    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:10.573499    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:10.585133    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:10.585148    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:10.601465    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:10.601478    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:10.613114    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:10.613127    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:10.624211    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:10.624223    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:10.636303    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:10.636319    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:10.670955    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:10.670970    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:13.220403    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:18.222393    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:18.222688    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:18.245733    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:18.245870    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:18.261907    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:18.262018    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:18.275218    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:18.275313    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:18.286257    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:18.286339    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:18.296968    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:18.297053    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:18.307312    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:18.307396    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:18.317713    4163 logs.go:276] 0 containers: []
	W0916 10:50:18.317725    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:18.317801    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:18.327849    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:18.327866    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:18.327872    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:18.339589    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:18.339600    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:18.379608    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:18.379619    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:18.415196    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:18.415213    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:18.457014    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:18.457025    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:18.471230    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:18.471245    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:18.486311    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:18.486325    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:18.497813    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:18.497826    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:18.511366    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:18.511379    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:18.526153    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:18.526167    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:18.548857    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:18.548868    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:18.560416    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:18.560429    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:18.571922    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:18.571935    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:18.595409    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:18.595415    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:18.600070    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:18.600076    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:18.614560    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:18.614575    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:18.629149    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:18.629164    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:21.143473    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:26.145710    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:26.146018    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:26.169837    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:26.169987    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:26.186545    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:26.186644    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:26.205400    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:26.205490    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:26.217976    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:26.218063    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:26.228293    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:26.228402    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:26.239180    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:26.239266    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:26.249587    4163 logs.go:276] 0 containers: []
	W0916 10:50:26.249598    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:26.249674    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:26.260743    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:26.260761    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:26.260766    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:26.272638    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:26.272652    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:26.295613    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:26.295620    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:26.307707    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:26.307722    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:26.319932    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:26.319944    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:26.331548    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:26.331561    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:26.335767    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:26.335774    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:26.374848    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:26.374857    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:26.386568    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:26.386582    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:26.408525    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:26.408538    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:26.423585    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:26.423599    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:26.462676    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:26.462685    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:26.480957    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:26.480970    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:26.495883    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:26.495898    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:26.507600    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:26.507609    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:26.543463    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:26.543476    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:26.558222    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:26.558237    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:29.075456    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:34.077557    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:34.077723    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:34.094797    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:34.094892    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:34.111549    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:34.111635    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:34.127803    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:34.127891    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:34.138844    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:34.138940    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:34.153426    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:34.153507    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:34.164179    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:34.164263    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:34.175040    4163 logs.go:276] 0 containers: []
	W0916 10:50:34.175051    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:34.175128    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:34.186288    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:34.186307    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:34.186313    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:34.197679    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:34.197691    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:34.209645    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:34.209654    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:34.226893    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:34.226902    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:34.240952    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:34.240968    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:34.252706    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:34.252718    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:34.264815    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:34.264827    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:34.279095    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:34.279106    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:34.317314    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:34.317324    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:34.332176    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:34.332186    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:34.349223    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:34.349237    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:34.365922    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:34.365932    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:34.404811    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:34.404820    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:34.408781    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:34.408790    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:34.442836    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:34.442848    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:34.457703    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:34.457714    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:34.479190    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:34.479198    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:36.993160    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:41.994860    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:41.995114    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:50:42.013471    4163 logs.go:276] 2 containers: [74d76eebdf5b bc2f80890fd2]
	I0916 10:50:42.013589    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:50:42.027320    4163 logs.go:276] 2 containers: [b69d633e855b 8d4d0ab15021]
	I0916 10:50:42.027406    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:50:42.039105    4163 logs.go:276] 1 containers: [48bd1a74f6a9]
	I0916 10:50:42.039186    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:50:42.049902    4163 logs.go:276] 2 containers: [ba8dfd742383 260c90f3d5ef]
	I0916 10:50:42.049981    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:50:42.060364    4163 logs.go:276] 1 containers: [58c755b4cced]
	I0916 10:50:42.060441    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:50:42.070804    4163 logs.go:276] 2 containers: [aad936fc9923 8d9f55826a97]
	I0916 10:50:42.070872    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:50:42.081141    4163 logs.go:276] 0 containers: []
	W0916 10:50:42.081154    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:50:42.081231    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:50:42.091635    4163 logs.go:276] 2 containers: [7404f8339424 1bdd650df553]
	I0916 10:50:42.091653    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:50:42.091661    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:50:42.096002    4163 logs.go:123] Gathering logs for kube-apiserver [74d76eebdf5b] ...
	I0916 10:50:42.096010    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74d76eebdf5b"
	I0916 10:50:42.112640    4163 logs.go:123] Gathering logs for etcd [b69d633e855b] ...
	I0916 10:50:42.112650    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b69d633e855b"
	I0916 10:50:42.129275    4163 logs.go:123] Gathering logs for kube-scheduler [260c90f3d5ef] ...
	I0916 10:50:42.129286    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 260c90f3d5ef"
	I0916 10:50:42.145006    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:50:42.145017    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:50:42.168325    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:50:42.168333    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:50:42.180244    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:50:42.180254    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:50:42.219411    4163 logs.go:123] Gathering logs for kube-controller-manager [8d9f55826a97] ...
	I0916 10:50:42.219421    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d9f55826a97"
	I0916 10:50:42.234107    4163 logs.go:123] Gathering logs for kube-apiserver [bc2f80890fd2] ...
	I0916 10:50:42.234118    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc2f80890fd2"
	I0916 10:50:42.280211    4163 logs.go:123] Gathering logs for kube-scheduler [ba8dfd742383] ...
	I0916 10:50:42.280222    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8dfd742383"
	I0916 10:50:42.292057    4163 logs.go:123] Gathering logs for kube-proxy [58c755b4cced] ...
	I0916 10:50:42.292068    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58c755b4cced"
	I0916 10:50:42.304565    4163 logs.go:123] Gathering logs for storage-provisioner [7404f8339424] ...
	I0916 10:50:42.304575    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7404f8339424"
	I0916 10:50:42.315937    4163 logs.go:123] Gathering logs for storage-provisioner [1bdd650df553] ...
	I0916 10:50:42.315950    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdd650df553"
	I0916 10:50:42.326725    4163 logs.go:123] Gathering logs for coredns [48bd1a74f6a9] ...
	I0916 10:50:42.326735    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48bd1a74f6a9"
	I0916 10:50:42.341176    4163 logs.go:123] Gathering logs for etcd [8d4d0ab15021] ...
	I0916 10:50:42.341188    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4d0ab15021"
	I0916 10:50:42.355370    4163 logs.go:123] Gathering logs for kube-controller-manager [aad936fc9923] ...
	I0916 10:50:42.355379    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aad936fc9923"
	I0916 10:50:42.373216    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:50:42.373226    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:50:44.915151    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:49.916041    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:50:49.916121    4163 kubeadm.go:597] duration metric: took 4m3.9849355s to restartPrimaryControlPlane
	W0916 10:50:49.916180    4163 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 10:50:49.916211    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0916 10:50:50.924647    4163 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.008454167s)
	I0916 10:50:50.924742    4163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
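
Once the 4-minute restart deadline lapses, the fallback is a full teardown. The two commands just logged, runnable as-is inside the guest:

    # Tear down the failed control plane with the kubeadm binary minikube staged,
    # then check whether the kubelet unit is still active (exit 0 = active).
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
    sudo systemctl is-active --quiet service kubelet; echo "kubelet active: $?"
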
	I0916 10:50:50.929755    4163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:50:50.932466    4163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:50:50.935225    4163 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:50:50.935232    4163 kubeadm.go:157] found existing configuration files:
	
	I0916 10:50:50.935259    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf
	I0916 10:50:50.938249    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:50:50.938275    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:50:50.940885    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf
	I0916 10:50:50.943419    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:50:50.943448    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:50:50.946519    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf
	I0916 10:50:50.949286    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:50:50.949311    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:50:50.951726    4163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf
	I0916 10:50:50.954533    4163 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:50:50.954552    4163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
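
The stale-config pass greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that lacks it; here every grep exits 2 because the files are already gone after the reset. The four check-and-remove pairs above collapse into one loop:

    # Drop kubeconfigs that do not reference the expected endpoint.
    ENDPOINT=https://control-plane.minikube.internal:50522
    for f in admin kubelet controller-manager scheduler; do
      sudo grep "$ENDPOINT" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done
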
	I0916 10:50:50.957274    4163 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 10:50:50.975209    4163 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0916 10:50:50.975272    4163 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:50:51.022181    4163 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:50:51.022246    4163 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:50:51.022322    4163 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 10:50:51.072618    4163 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:50:51.078781    4163 out.go:235]   - Generating certificates and keys ...
	I0916 10:50:51.078816    4163 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:50:51.078849    4163 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:50:51.078899    4163 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 10:50:51.078932    4163 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 10:50:51.078969    4163 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 10:50:51.079003    4163 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 10:50:51.079038    4163 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 10:50:51.079064    4163 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 10:50:51.079108    4163 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 10:50:51.079152    4163 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 10:50:51.079173    4163 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 10:50:51.079202    4163 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:50:51.143726    4163 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:50:51.260328    4163 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:50:51.364328    4163 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:50:51.511064    4163 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:50:51.542659    4163 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:50:51.542709    4163 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:50:51.542731    4163 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:50:51.626895    4163 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:50:51.633100    4163 out.go:235]   - Booting up control plane ...
	I0916 10:50:51.633155    4163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:50:51.633203    4163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:50:51.633237    4163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:50:51.633288    4163 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:50:51.633379    4163 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 10:50:56.132865    4163 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502296 seconds
	I0916 10:50:56.132994    4163 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:50:56.139814    4163 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:50:56.650478    4163 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:50:56.650594    4163 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-385000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:50:57.154428    4163 kubeadm.go:310] [bootstrap-token] Using token: j84bsm.6jms1j7q43m6h00p
	I0916 10:50:57.157640    4163 out.go:235]   - Configuring RBAC rules ...
	I0916 10:50:57.157693    4163 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:50:57.157731    4163 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:50:57.161211    4163 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:50:57.162028    4163 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:50:57.162809    4163 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:50:57.163582    4163 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:50:57.167042    4163 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:50:57.334185    4163 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:50:57.558563    4163 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:50:57.559023    4163 kubeadm.go:310] 
	I0916 10:50:57.559058    4163 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:50:57.559064    4163 kubeadm.go:310] 
	I0916 10:50:57.559114    4163 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:50:57.559118    4163 kubeadm.go:310] 
	I0916 10:50:57.559132    4163 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:50:57.559182    4163 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:50:57.559208    4163 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:50:57.559212    4163 kubeadm.go:310] 
	I0916 10:50:57.559243    4163 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:50:57.559248    4163 kubeadm.go:310] 
	I0916 10:50:57.559276    4163 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:50:57.559280    4163 kubeadm.go:310] 
	I0916 10:50:57.559309    4163 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:50:57.559348    4163 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:50:57.559401    4163 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:50:57.559405    4163 kubeadm.go:310] 
	I0916 10:50:57.559452    4163 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:50:57.559494    4163 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:50:57.559498    4163 kubeadm.go:310] 
	I0916 10:50:57.559545    4163 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j84bsm.6jms1j7q43m6h00p \
	I0916 10:50:57.559598    4163 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda \
	I0916 10:50:57.559610    4163 kubeadm.go:310] 	--control-plane 
	I0916 10:50:57.559615    4163 kubeadm.go:310] 
	I0916 10:50:57.559664    4163 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:50:57.559668    4163 kubeadm.go:310] 
	I0916 10:50:57.559719    4163 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j84bsm.6jms1j7q43m6h00p \
	I0916 10:50:57.559777    4163 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f4cbf98c9db407bfd377513d8a979980a7165b5a1a5b1a669b5a690e8302fdda 
	I0916 10:50:57.559892    4163 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:50:57.559985    4163 cni.go:84] Creating CNI manager for ""
	I0916 10:50:57.560001    4163 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:50:57.570498    4163 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 10:50:57.574486    4163 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 10:50:57.577662    4163 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
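
The 496-byte conflist itself is not echoed into the log. Purely as an illustration of the shape of a bridge conflist, a sketch follows; every field and value in it is an assumption, not the file's actual contents.

    # Illustrative only: the real /etc/cni/net.d/1-k8s.conflist payload is not
    # shown in the log, so treat everything below as assumed.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
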
	I0916 10:50:57.582629    4163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:50:57.582683    4163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:50:57.582696    4163 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-385000 minikube.k8s.io/updated_at=2024_09_16T10_50_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=stopped-upgrade-385000 minikube.k8s.io/primary=true
	I0916 10:50:57.621029    4163 kubeadm.go:1113] duration metric: took 38.388208ms to wait for elevateKubeSystemPrivileges
	I0916 10:50:57.621038    4163 ops.go:34] apiserver oom_adj: -16
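
The oom_adj probe confirms the apiserver is shielded from the OOM killer: the legacy /proc value -16 corresponds to an oom_score_adj near -998, the adjustment the kubelet applies to guaranteed/critical pods. To spot-check it:

    # Read the legacy OOM adjustment of the running apiserver process.
    cat /proc/$(pgrep kube-apiserver)/oom_adj   # expect -16, as logged above
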
	I0916 10:50:57.621079    4163 kubeadm.go:394] duration metric: took 4m11.703933083s to StartCluster
	I0916 10:50:57.621090    4163 settings.go:142] acquiring lock: {Name:mkcc144e0c413dd8611ee3ccbc8c08f02650f2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:57.621184    4163 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:50:57.621587    4163 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/kubeconfig: {Name:mk3766c19461825f7de68cf1dc4ddceadf57e288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:57.621799    4163 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:50:57.621809    4163 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:50:57.621845    4163 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-385000"
	I0916 10:50:57.621856    4163 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-385000"
	W0916 10:50:57.621859    4163 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:50:57.621859    4163 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-385000"
	I0916 10:50:57.621869    4163 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-385000"
	I0916 10:50:57.621872    4163 host.go:66] Checking if "stopped-upgrade-385000" exists ...
	I0916 10:50:57.621895    4163 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:50:57.622763    4163 kapi.go:59] client config for stopped-upgrade-385000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.key", CAFile:"/Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104389800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:50:57.622897    4163 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-385000"
	W0916 10:50:57.622902    4163 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:50:57.622909    4163 host.go:66] Checking if "stopped-upgrade-385000" exists ...
	I0916 10:50:57.625813    4163 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:57.625819    4163 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:50:57.625825    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:50:57.625523    4163 out.go:177] * Verifying Kubernetes components...
	I0916 10:50:57.633445    4163 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:50:57.637524    4163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:50:57.641442    4163 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:57.641448    4163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:50:57.641455    4163 sshutil.go:53] new ssh client: &{IP:localhost Port:50487 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/stopped-upgrade-385000/id_rsa Username:docker}
	I0916 10:50:57.712025    4163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:50:57.717089    4163 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:50:57.717140    4163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:57.720927    4163 api_server.go:72] duration metric: took 99.120375ms to wait for apiserver process to appear ...
	I0916 10:50:57.720934    4163 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:50:57.720941    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:50:57.743536    4163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:50:57.785425    4163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:50:58.137440    4163 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:50:58.137453    4163 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:51:02.722870    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:02.722921    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:07.723095    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:07.723120    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:12.723663    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:12.723703    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:17.724177    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:17.724215    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:22.724861    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:22.724897    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:27.725894    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:27.725932    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0916 10:51:28.139279    4163 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0916 10:51:28.147867    4163 out.go:177] * Enabled addons: storage-provisioner
	I0916 10:51:28.155785    4163 addons.go:510] duration metric: took 30.534887167s for enable addons: enabled=[storage-provisioner]
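
The default-storageclass callback fails for the same reason every healthz probe does: the apiserver at 10.0.2.15:8443 never answers. The failed LIST can be replayed directly, using the client certificate paths from the rest.Config logged earlier, from wherever that address is reachable (inside the guest, or via minikube's forwarded port):

    # Replay the StorageClasses LIST that timed out, authenticating with the
    # profile's client cert/key (paths taken from the kapi.go line above).
    curl --max-time 5 \
      --cacert /Users/jenkins/minikube-integration/19649-964/.minikube/ca.crt \
      --cert /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.crt \
      --key /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/stopped-upgrade-385000/client.key \
      https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses
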
	I0916 10:51:32.726118    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:32.726160    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:37.727455    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:37.727502    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:42.729097    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:42.729139    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:47.730640    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:47.730675    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:52.732748    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:52.732780    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:51:57.734076    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:51:57.734261    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:51:57.744944    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:51:57.745032    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:51:57.755111    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:51:57.755186    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:51:57.765271    4163 logs.go:276] 2 containers: [502b71507c91 b28b03a1a632]
	I0916 10:51:57.765358    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:51:57.775098    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:51:57.775170    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:51:57.785337    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:51:57.785414    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:51:57.795701    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:51:57.795780    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:51:57.805703    4163 logs.go:276] 0 containers: []
	W0916 10:51:57.805715    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:51:57.805794    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:51:57.816158    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:51:57.816176    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:51:57.816182    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:51:57.828024    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:51:57.828034    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:51:57.845787    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:51:57.845801    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:51:57.881972    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:51:57.881985    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:51:57.886731    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:51:57.886739    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:51:57.921252    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:51:57.921264    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:51:57.936651    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:51:57.936666    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:51:57.957824    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:51:57.957839    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:51:57.969070    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:51:57.969084    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:51:57.992531    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:51:57.992538    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:51:58.003711    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:51:58.003722    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:51:58.018455    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:51:58.018468    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:51:58.032412    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:51:58.032425    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
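The pattern above repeats for the remainder of this log: each probe of https://10.0.2.15:8443/healthz times out after roughly 5 seconds with "context deadline exceeded", minikube re-enumerates the control-plane containers and tails their logs, then probes again. A minimal sketch of that probe loop using curl (the endpoint and per-attempt timeout are taken from the log; the overall 240s deadline is an assumption for illustration, not minikube's actual bound):

    # Poll the apiserver health endpoint as the log above does: 5s
    # per-attempt timeout, self-signed cert accepted (-k), retry until
    # healthy. The 240s overall deadline is assumed, not from the log.
    deadline=$((SECONDS + 240))
    until curl -fsk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "apiserver never became healthy" >&2
        break
      fi
      echo "healthz probe timed out; retrying"
    done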
	I0916 10:52:00.546037    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:52:05.548269    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:52:05.548478    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:52:05.566380    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:52:05.566494    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:52:05.582391    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:52:05.582475    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:52:05.593683    4163 logs.go:276] 2 containers: [502b71507c91 b28b03a1a632]
	I0916 10:52:05.593756    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:52:05.603972    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:52:05.604052    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:52:05.614206    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:52:05.614279    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:52:05.624711    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:52:05.624779    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:52:05.634337    4163 logs.go:276] 0 containers: []
	W0916 10:52:05.634349    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:52:05.634421    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:52:05.644411    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:52:05.644428    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:52:05.644434    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:52:05.655706    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:52:05.655721    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:52:05.689573    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:52:05.689583    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:52:05.693857    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:52:05.693865    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:52:05.708440    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:52:05.708450    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:52:05.732013    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:52:05.732023    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:52:05.743716    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:52:05.743726    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:52:05.762001    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:52:05.762013    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:52:05.774055    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:52:05.774065    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:52:05.811567    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:52:05.811582    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:52:05.825713    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:52:05.825725    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:52:05.837281    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:52:05.837293    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:52:05.848658    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:52:05.848673    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:52:08.375102    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:52:13.377642    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:52:13.378237    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:52:13.418744    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:52:13.418904    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:52:13.445563    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:52:13.445678    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:52:13.460070    4163 logs.go:276] 2 containers: [502b71507c91 b28b03a1a632]
	I0916 10:52:13.460144    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:52:13.472110    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:52:13.472195    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:52:13.482832    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:52:13.482917    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:52:13.493466    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:52:13.493549    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:52:13.503553    4163 logs.go:276] 0 containers: []
	W0916 10:52:13.503564    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:52:13.503635    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:52:13.513821    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:52:13.513833    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:52:13.513839    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:52:13.535147    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:52:13.535157    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:52:13.547080    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:52:13.547090    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:52:13.566637    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:52:13.566647    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:52:13.584411    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:52:13.584420    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:52:13.608909    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:52:13.608918    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:52:13.620633    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:52:13.620645    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:52:13.655075    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:52:13.655099    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:52:13.666685    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:52:13.666694    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:52:13.682726    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:52:13.682736    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:52:13.697040    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:52:13.697049    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:52:13.708574    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:52:13.708585    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:52:13.742287    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:52:13.742295    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
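Each gathering pass has the same shape: list the container IDs for every expected component via a docker ps name filter, tail the last 400 lines of each match, and warn when nothing matches (as with "kindnet" above). A sketch of that fan-out, assuming the k8s_ container-name prefix used by the Docker CRI (the component list is copied from the filters above):

    # Enumerate and tail each control-plane component the way the log does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter="name=k8s_${name}" --format='{{.ID}}')
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\"" >&2
        continue
      fi
      for id in $ids; do
        docker logs --tail 400 "$id"
      done
    done
    # Host-side logs are collected the same way:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400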
	I0916 10:52:16.248873    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:52:21.251357    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:52:21.251882    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:52:21.286961    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:52:21.287127    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:52:21.306848    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:52:21.306962    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:52:21.321660    4163 logs.go:276] 2 containers: [502b71507c91 b28b03a1a632]
	I0916 10:52:21.321749    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:52:21.336359    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:52:21.336449    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:52:21.346883    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:52:21.346964    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:52:21.357547    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:52:21.357612    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:52:21.367989    4163 logs.go:276] 0 containers: []
	W0916 10:52:21.368001    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:52:21.368070    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:52:21.377684    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:52:21.377701    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:52:21.377707    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:52:21.382500    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:52:21.382511    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:52:21.417139    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:52:21.417149    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:52:21.439735    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:52:21.439747    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:52:21.451529    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:52:21.451538    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:52:21.466772    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:52:21.466782    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:52:21.484588    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:52:21.484597    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:52:21.520035    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:52:21.520042    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:52:21.534075    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:52:21.534086    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:52:21.545460    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:52:21.545471    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:52:21.561070    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:52:21.561080    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:52:21.578190    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:52:21.578199    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:52:21.602940    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:52:21.602948    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
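The "container status" step just above tolerates either runtime CLI: the backquoted `which crictl || echo crictl` substitutes the literal word crictl when the binary is absent, so the first sudo command fails cleanly and the || falls through to docker. Written out in plainer bash (behaviorally equivalent, not minikube's source):

    # Prefer crictl when installed; otherwise (or if it fails) fall back
    # to docker for the container listing.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a || sudo docker ps -a
    else
      sudo docker ps -a
    fi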
	I0916 10:52:24.116130    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:52:29.118887    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:52:29.119493    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:52:29.159524    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:52:29.159689    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:52:29.181808    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:52:29.181938    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:52:29.198612    4163 logs.go:276] 2 containers: [502b71507c91 b28b03a1a632]
	I0916 10:52:29.198709    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:52:29.211370    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:52:29.211455    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:52:29.222488    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:52:29.222573    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:52:29.232823    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:52:29.232907    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:52:29.242953    4163 logs.go:276] 0 containers: []
	W0916 10:52:29.242967    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:52:29.243026    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:52:29.254342    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:52:29.254356    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:52:29.254361    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:52:29.268707    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:52:29.268718    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:52:29.280419    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:52:29.280429    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:52:29.292045    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:52:29.292056    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:52:29.306913    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:52:29.306921    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:52:29.318858    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:52:29.318869    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:52:29.336031    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:52:29.336040    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:52:29.371373    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:52:29.371384    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:52:29.405019    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:52:29.405028    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:52:29.416184    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:52:29.416196    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:52:29.437034    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:52:29.437045    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:52:29.462678    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:52:29.462688    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:52:29.467019    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:52:29.467029    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:52:31.982752    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:52:36.985460    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:52:36.986059    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:52:37.030305    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:52:37.030475    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:52:37.052584    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:52:37.052698    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:52:37.066815    4163 logs.go:276] 2 containers: [502b71507c91 b28b03a1a632]
	I0916 10:52:37.066907    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:52:37.083909    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:52:37.084002    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:52:37.095116    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:52:37.095203    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:52:37.106181    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:52:37.106260    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:52:37.116480    4163 logs.go:276] 0 containers: []
	W0916 10:52:37.116490    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:52:37.116561    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:52:37.126879    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:52:37.126894    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:52:37.126899    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:52:37.141061    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:52:37.141070    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:52:37.156716    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:52:37.156729    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:52:37.168096    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:52:37.168106    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:52:37.185362    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:52:37.185372    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:52:37.196788    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:52:37.196802    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:52:37.208391    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:52:37.208411    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:52:37.222765    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:52:37.222801    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:52:37.226985    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:52:37.226993    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:52:37.259854    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:52:37.259870    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:52:37.275769    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:52:37.275779    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:52:37.286915    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:52:37.286924    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:52:37.311099    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:52:37.311107    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:52:39.845865    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:52:44.848519    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:52:44.849139    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:52:44.890833    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:52:44.891005    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:52:44.912086    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:52:44.912221    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:52:44.927202    4163 logs.go:276] 2 containers: [502b71507c91 b28b03a1a632]
	I0916 10:52:44.927292    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:52:44.939707    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:52:44.939791    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:52:44.950560    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:52:44.950645    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:52:44.961300    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:52:44.961387    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:52:44.971562    4163 logs.go:276] 0 containers: []
	W0916 10:52:44.971575    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:52:44.971651    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:52:44.982346    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:52:44.982362    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:52:44.982367    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:52:45.016450    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:52:45.016460    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:52:45.030295    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:52:45.030305    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:52:45.051407    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:52:45.051418    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:52:45.071230    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:52:45.071239    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:52:45.090060    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:52:45.090071    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:52:45.113075    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:52:45.113083    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:52:45.125504    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:52:45.125520    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:52:45.130173    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:52:45.130182    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:52:45.166920    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:52:45.166933    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:52:45.181223    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:52:45.181234    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:52:45.192556    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:52:45.192566    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:52:45.204026    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:52:45.204037    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:52:47.717173    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:52:52.719455    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:52:52.719755    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:52:52.744663    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:52:52.744805    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:52:52.761547    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:52:52.761647    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:52:52.774581    4163 logs.go:276] 2 containers: [502b71507c91 b28b03a1a632]
	I0916 10:52:52.774659    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:52:52.785851    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:52:52.785920    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:52:52.796246    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:52:52.796335    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:52:52.810575    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:52:52.810655    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:52:52.827549    4163 logs.go:276] 0 containers: []
	W0916 10:52:52.827560    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:52:52.827635    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:52:52.840755    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:52:52.840768    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:52:52.840773    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:52:52.855024    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:52:52.855038    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:52:52.868727    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:52:52.868737    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:52:52.880115    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:52:52.880125    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:52:52.895093    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:52:52.895102    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:52:52.906351    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:52:52.906362    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:52:52.929370    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:52:52.929378    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:52:52.933926    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:52:52.933935    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:52:52.967863    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:52:52.967877    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:52:52.979356    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:52:52.979367    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:52:52.996581    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:52:52.996592    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:52:53.008429    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:52:53.008441    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:52:53.042287    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:52:53.042294    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:52:55.555930    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:53:00.558568    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:53:00.559030    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:53:00.592985    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:53:00.593150    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:53:00.614265    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:53:00.614390    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:53:00.632527    4163 logs.go:276] 2 containers: [502b71507c91 b28b03a1a632]
	I0916 10:53:00.632622    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:53:00.645002    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:53:00.645089    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:53:00.655205    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:53:00.655279    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:53:00.666703    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:53:00.666791    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:53:00.676670    4163 logs.go:276] 0 containers: []
	W0916 10:53:00.676683    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:53:00.676757    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:53:00.686864    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:53:00.686879    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:53:00.686884    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:53:00.701530    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:53:00.701542    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:53:00.713143    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:53:00.713155    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:53:00.724511    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:53:00.724523    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:53:00.739262    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:53:00.739277    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:53:00.750413    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:53:00.750426    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:53:00.784200    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:53:00.784211    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:53:00.788548    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:53:00.788554    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:53:00.802329    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:53:00.802343    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:53:00.814456    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:53:00.814469    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:53:00.830892    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:53:00.830904    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:53:00.842332    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:53:00.842345    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:53:00.867072    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:53:00.867080    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:53:03.403825    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:53:08.406243    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:53:08.406572    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:53:08.480252    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:53:08.480354    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:53:08.508895    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:53:08.508980    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:53:08.542739    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:53:08.542834    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:53:08.569883    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:53:08.569977    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:53:08.595406    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:53:08.595486    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:53:08.611341    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:53:08.611413    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:53:08.621677    4163 logs.go:276] 0 containers: []
	W0916 10:53:08.621692    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:53:08.621770    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:53:08.632168    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:53:08.632184    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:53:08.632190    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:53:08.667033    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:53:08.667041    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:53:08.681376    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:53:08.681386    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:53:08.694397    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:53:08.694410    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:53:08.705119    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:53:08.705132    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:53:08.720858    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:53:08.720868    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:53:08.732209    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:53:08.732219    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:53:08.743741    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:53:08.743751    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:53:08.755173    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:53:08.755183    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:53:08.778663    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:53:08.778670    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:53:08.791172    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:53:08.791182    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:53:08.795629    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:53:08.795638    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:53:08.831351    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:53:08.831366    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:53:08.845289    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:53:08.845299    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:53:08.862150    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:53:08.862165    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:53:11.375626    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:53:16.376342    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:53:16.376478    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:53:16.390481    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:53:16.390565    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:53:16.401451    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:53:16.401528    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:53:16.412426    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:53:16.412509    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:53:16.426547    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:53:16.426624    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:53:16.437018    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:53:16.437093    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:53:16.447526    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:53:16.447597    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:53:16.457548    4163 logs.go:276] 0 containers: []
	W0916 10:53:16.457566    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:53:16.457638    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:53:16.470144    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:53:16.470167    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:53:16.470176    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:53:16.505401    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:53:16.505409    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:53:16.516364    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:53:16.516376    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:53:16.531296    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:53:16.531306    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:53:16.548525    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:53:16.548536    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:53:16.560945    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:53:16.560955    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:53:16.575229    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:53:16.575239    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:53:16.586592    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:53:16.586607    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:53:16.611943    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:53:16.611958    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:53:16.616018    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:53:16.616025    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:53:16.627436    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:53:16.627447    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:53:16.662654    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:53:16.662664    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:53:16.677148    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:53:16.677158    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:53:16.692680    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:53:16.692691    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:53:16.704077    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:53:16.704088    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:53:19.218172    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:53:24.218738    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:53:24.219156    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:53:24.256154    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:53:24.256301    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:53:24.277000    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:53:24.277109    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:53:24.290360    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:53:24.290436    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:53:24.300915    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:53:24.300979    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:53:24.319517    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:53:24.319602    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:53:24.330349    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:53:24.330426    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:53:24.340783    4163 logs.go:276] 0 containers: []
	W0916 10:53:24.340798    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:53:24.340866    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:53:24.351503    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:53:24.351520    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:53:24.351527    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:53:24.364281    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:53:24.364297    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:53:24.369080    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:53:24.369089    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:53:24.390798    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:53:24.390811    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:53:24.405129    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:53:24.405141    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:53:24.416885    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:53:24.416898    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:53:24.432234    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:53:24.432244    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:53:24.467824    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:53:24.467835    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:53:24.479220    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:53:24.479231    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:53:24.496500    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:53:24.496510    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:53:24.532344    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:53:24.532356    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:53:24.558994    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:53:24.559010    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:53:24.573804    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:53:24.573821    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:53:24.585741    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:53:24.585751    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:53:24.597847    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:53:24.597856    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:53:27.121908    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:53:32.124104    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:53:32.124665    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:53:32.164716    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:53:32.164872    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:53:32.187026    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:53:32.187165    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:53:32.202647    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:53:32.202741    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:53:32.215601    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:53:32.215686    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:53:32.227239    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:53:32.227312    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:53:32.237399    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:53:32.237481    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:53:32.248277    4163 logs.go:276] 0 containers: []
	W0916 10:53:32.248292    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:53:32.248356    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:53:32.258832    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:53:32.258852    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:53:32.258859    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:53:32.274675    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:53:32.274684    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:53:32.285873    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:53:32.285882    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:53:32.309847    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:53:32.309859    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:53:32.343240    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:53:32.343248    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:53:32.354312    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:53:32.354322    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:53:32.367988    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:53:32.367999    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:53:32.379521    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:53:32.379532    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:53:32.414025    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:53:32.414038    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:53:32.425626    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:53:32.425640    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:53:32.441243    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:53:32.441254    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:53:32.465897    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:53:32.465912    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:53:32.470158    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:53:32.470167    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:53:32.484494    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:53:32.484504    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:53:32.498571    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:53:32.498582    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:53:35.015671    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:53:40.018278    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:53:40.018375    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:53:40.030723    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:53:40.030806    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:53:40.043986    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:53:40.044044    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:53:40.055198    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:53:40.055276    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:53:40.067445    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:53:40.067519    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:53:40.077871    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:53:40.077954    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:53:40.088306    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:53:40.088378    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:53:40.098311    4163 logs.go:276] 0 containers: []
	W0916 10:53:40.098323    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:53:40.098372    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:53:40.109092    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:53:40.109111    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:53:40.109118    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:53:40.124253    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:53:40.124274    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:53:40.136157    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:53:40.136169    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:53:40.150475    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:53:40.150486    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:53:40.163241    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:53:40.163253    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:53:40.176044    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:53:40.176060    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:53:40.189644    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:53:40.189658    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:53:40.209144    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:53:40.209165    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:53:40.236121    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:53:40.236147    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:53:40.273748    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:53:40.273768    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:53:40.313214    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:53:40.313226    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:53:40.328288    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:53:40.328301    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:53:40.332752    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:53:40.332759    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:53:40.347499    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:53:40.347515    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:53:40.359708    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:53:40.359720    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:53:42.879168    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:53:47.881837    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:53:47.882023    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:53:47.901834    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:53:47.901923    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:53:47.914593    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:53:47.914665    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:53:47.930903    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:53:47.930990    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:53:47.941582    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:53:47.941668    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:53:47.951932    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:53:47.952013    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:53:47.961914    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:53:47.962000    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:53:47.972247    4163 logs.go:276] 0 containers: []
	W0916 10:53:47.972261    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:53:47.972338    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:53:47.982748    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:53:47.982764    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:53:47.982770    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:53:48.016307    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:53:48.016315    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:53:48.028518    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:53:48.028531    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:53:48.043678    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:53:48.043687    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:53:48.068636    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:53:48.068644    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:53:48.082405    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:53:48.082415    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:53:48.093737    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:53:48.093750    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:53:48.105745    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:53:48.105756    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:53:48.117035    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:53:48.117045    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:53:48.121533    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:53:48.121540    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:53:48.155222    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:53:48.155234    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:53:48.170207    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:53:48.170219    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:53:48.181974    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:53:48.181983    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:53:48.199937    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:53:48.199950    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:53:48.211345    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:53:48.211356    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:53:50.724650    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:53:55.726443    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:53:55.726611    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:53:55.738114    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:53:55.738201    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:53:55.747984    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:53:55.748059    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:53:55.758839    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:53:55.758923    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:53:55.769363    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:53:55.769436    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:53:55.780424    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:53:55.780493    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:53:55.796469    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:53:55.796558    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:53:55.806562    4163 logs.go:276] 0 containers: []
	W0916 10:53:55.806574    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:53:55.806640    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:53:55.821657    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:53:55.821676    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:53:55.821682    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:53:55.837144    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:53:55.837154    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:53:55.848403    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:53:55.848413    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:53:55.861587    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:53:55.861603    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:53:55.896532    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:53:55.896547    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:53:55.910556    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:53:55.910566    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:53:55.924691    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:53:55.924700    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:53:55.948998    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:53:55.949009    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:53:55.964322    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:53:55.964333    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:53:55.975621    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:53:55.975632    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:53:55.987152    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:53:55.987161    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:53:56.004205    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:53:56.004214    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:53:56.009048    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:53:56.009057    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:53:56.020683    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:53:56.020693    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:53:56.054844    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:53:56.054853    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:53:58.567383    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:54:03.569664    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:54:03.569741    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:54:03.582506    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:54:03.582574    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:54:03.594653    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:54:03.594732    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:54:03.610752    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:54:03.610818    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:54:03.622162    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:54:03.622243    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:54:03.634534    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:54:03.634594    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:54:03.646211    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:54:03.646282    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:54:03.656331    4163 logs.go:276] 0 containers: []
	W0916 10:54:03.656342    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:54:03.656403    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:54:03.667687    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:54:03.667706    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:54:03.667712    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:54:03.704730    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:54:03.704742    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:54:03.723961    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:54:03.723975    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:54:03.749436    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:54:03.749445    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:54:03.761880    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:54:03.761892    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:54:03.773964    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:54:03.773975    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:54:03.786841    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:54:03.786852    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:54:03.802669    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:54:03.802686    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:54:03.815427    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:54:03.815439    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:54:03.828858    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:54:03.828870    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:54:03.845081    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:54:03.845117    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:54:03.861012    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:54:03.861026    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:54:03.874818    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:54:03.874836    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:54:03.910904    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:54:03.910923    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:54:03.917210    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:54:03.917221    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:54:06.431376    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:54:11.434087    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:54:11.434620    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:54:11.475740    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:54:11.475922    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:54:11.495213    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:54:11.495321    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:54:11.508506    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:54:11.508597    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:54:11.519537    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:54:11.519621    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:54:11.529807    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:54:11.529892    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:54:11.546676    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:54:11.546761    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:54:11.558716    4163 logs.go:276] 0 containers: []
	W0916 10:54:11.558729    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:54:11.558802    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:54:11.568727    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:54:11.568746    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:54:11.568751    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:54:11.573194    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:54:11.573202    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:54:11.590634    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:54:11.590644    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:54:11.615394    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:54:11.615404    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:54:11.650908    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:54:11.650920    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:54:11.662237    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:54:11.662250    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:54:11.674058    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:54:11.674069    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:54:11.687611    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:54:11.687622    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:54:11.699242    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:54:11.699253    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:54:11.711064    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:54:11.711074    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:54:11.731400    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:54:11.731410    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:54:11.748110    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:54:11.748119    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:54:11.759912    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:54:11.759922    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:54:11.795523    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:54:11.795532    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:54:11.814084    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:54:11.814094    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:54:14.327808    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:54:19.330360    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:54:19.330743    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:54:19.363489    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:54:19.363651    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:54:19.383134    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:54:19.383262    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:54:19.397742    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:54:19.397839    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:54:19.410558    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:54:19.410635    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:54:19.426395    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:54:19.426480    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:54:19.437077    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:54:19.437150    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:54:19.447745    4163 logs.go:276] 0 containers: []
	W0916 10:54:19.447756    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:54:19.447826    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:54:19.458518    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:54:19.458536    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:54:19.458541    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:54:19.462764    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:54:19.462770    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:54:19.474494    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:54:19.474505    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:54:19.492622    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:54:19.492631    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:54:19.503794    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:54:19.503804    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:54:19.519484    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:54:19.519494    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:54:19.537317    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:54:19.537325    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:54:19.550429    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:54:19.550439    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:54:19.562006    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:54:19.562015    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:54:19.573987    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:54:19.574003    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:54:19.585998    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:54:19.586007    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:54:19.609681    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:54:19.609689    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:54:19.645019    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:54:19.645025    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:54:19.679579    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:54:19.679591    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:54:19.692093    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:54:19.692106    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:54:22.209212    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:54:27.211712    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:54:27.211795    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:54:27.223533    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:54:27.223611    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:54:27.235028    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:54:27.235116    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:54:27.250169    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:54:27.250238    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:54:27.261563    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:54:27.261632    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:54:27.272723    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:54:27.272808    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:54:27.284241    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:54:27.284340    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:54:27.295504    4163 logs.go:276] 0 containers: []
	W0916 10:54:27.295517    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:54:27.295594    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:54:27.311567    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:54:27.311588    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:54:27.311593    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:54:27.323702    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:54:27.323714    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:54:27.350079    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:54:27.350094    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:54:27.364987    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:54:27.364999    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:54:27.378535    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:54:27.378548    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:54:27.395709    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:54:27.395725    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:54:27.411435    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:54:27.411447    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:54:27.448806    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:54:27.448824    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:54:27.453477    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:54:27.453490    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:54:27.496217    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:54:27.496229    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:54:27.512201    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:54:27.512214    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:54:27.524678    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:54:27.524690    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:54:27.537537    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:54:27.537546    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:54:27.551738    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:54:27.551749    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:54:27.564750    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:54:27.564761    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:54:30.086187    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:54:35.088988    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:54:35.089599    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:54:35.130792    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:54:35.130959    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:54:35.154428    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:54:35.154545    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:54:35.169708    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:54:35.169799    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:54:35.183373    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:54:35.183456    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:54:35.194232    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:54:35.194315    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:54:35.209884    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:54:35.209977    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:54:35.229654    4163 logs.go:276] 0 containers: []
	W0916 10:54:35.229667    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:54:35.229749    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:54:35.250312    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:54:35.250330    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:54:35.250337    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:54:35.285491    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:54:35.285506    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:54:35.297332    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:54:35.297346    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:54:35.308815    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:54:35.308828    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:54:35.320627    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:54:35.320640    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:54:35.331916    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:54:35.331928    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:54:35.343638    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:54:35.343649    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:54:35.377634    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:54:35.377644    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:54:35.394873    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:54:35.394883    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:54:35.408996    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:54:35.409007    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:54:35.425119    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:54:35.425128    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:54:35.429381    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:54:35.429390    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:54:35.442968    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:54:35.442979    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:54:35.454251    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:54:35.454262    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:54:35.471855    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:54:35.471865    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:54:37.997016    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:54:42.998331    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:54:42.998520    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:54:43.010243    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:54:43.010334    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:54:43.020336    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:54:43.020412    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:54:43.031393    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:54:43.031466    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:54:43.041891    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:54:43.041976    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:54:43.052285    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:54:43.052354    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:54:43.063054    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:54:43.063137    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:54:43.073440    4163 logs.go:276] 0 containers: []
	W0916 10:54:43.073451    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:54:43.073515    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:54:43.083602    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:54:43.083616    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:54:43.083621    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:54:43.103960    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:54:43.103975    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:54:43.119722    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:54:43.119735    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:54:43.131106    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:54:43.131116    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:54:43.136034    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:54:43.136039    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:54:43.169813    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:54:43.169829    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:54:43.181591    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:54:43.181601    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:54:43.193241    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:54:43.193256    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:54:43.207362    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:54:43.207371    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:54:43.220101    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:54:43.220116    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:54:43.236023    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:54:43.236033    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:54:43.271781    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:54:43.271789    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:54:43.283427    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:54:43.283438    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:54:43.306762    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:54:43.306769    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:54:43.323998    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:54:43.324007    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:54:45.837063    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:54:50.839644    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:54:50.840296    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0916 10:54:50.878908    4163 logs.go:276] 1 containers: [04f689c9f4c0]
	I0916 10:54:50.879075    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0916 10:54:50.900740    4163 logs.go:276] 1 containers: [147a2864e6a5]
	I0916 10:54:50.900860    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0916 10:54:50.915814    4163 logs.go:276] 4 containers: [8c28f5c3b6ca 9701721b959b 502b71507c91 b28b03a1a632]
	I0916 10:54:50.915911    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0916 10:54:50.931785    4163 logs.go:276] 1 containers: [2411390eb2f6]
	I0916 10:54:50.931864    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0916 10:54:50.946400    4163 logs.go:276] 1 containers: [770552188df4]
	I0916 10:54:50.946480    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0916 10:54:50.957400    4163 logs.go:276] 1 containers: [b280c108ee29]
	I0916 10:54:50.957479    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0916 10:54:50.970786    4163 logs.go:276] 0 containers: []
	W0916 10:54:50.970796    4163 logs.go:278] No container was found matching "kindnet"
	I0916 10:54:50.970864    4163 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0916 10:54:50.981388    4163 logs.go:276] 1 containers: [3f8ede346a2f]
	I0916 10:54:50.981405    4163 logs.go:123] Gathering logs for etcd [147a2864e6a5] ...
	I0916 10:54:50.981412    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 147a2864e6a5"
	I0916 10:54:50.995903    4163 logs.go:123] Gathering logs for container status ...
	I0916 10:54:50.995916    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 10:54:51.009025    4163 logs.go:123] Gathering logs for kubelet ...
	I0916 10:54:51.009040    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 10:54:51.045161    4163 logs.go:123] Gathering logs for kube-apiserver [04f689c9f4c0] ...
	I0916 10:54:51.045185    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f689c9f4c0"
	I0916 10:54:51.061460    4163 logs.go:123] Gathering logs for kube-scheduler [2411390eb2f6] ...
	I0916 10:54:51.061472    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2411390eb2f6"
	I0916 10:54:51.079674    4163 logs.go:123] Gathering logs for storage-provisioner [3f8ede346a2f] ...
	I0916 10:54:51.079689    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8ede346a2f"
	I0916 10:54:51.092552    4163 logs.go:123] Gathering logs for dmesg ...
	I0916 10:54:51.092565    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 10:54:51.097152    4163 logs.go:123] Gathering logs for coredns [b28b03a1a632] ...
	I0916 10:54:51.097167    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b28b03a1a632"
	I0916 10:54:51.110597    4163 logs.go:123] Gathering logs for coredns [9701721b959b] ...
	I0916 10:54:51.110608    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701721b959b"
	I0916 10:54:51.123389    4163 logs.go:123] Gathering logs for coredns [502b71507c91] ...
	I0916 10:54:51.123402    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502b71507c91"
	I0916 10:54:51.136861    4163 logs.go:123] Gathering logs for kube-controller-manager [b280c108ee29] ...
	I0916 10:54:51.136873    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b280c108ee29"
	I0916 10:54:51.155291    4163 logs.go:123] Gathering logs for Docker ...
	I0916 10:54:51.155311    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0916 10:54:51.180845    4163 logs.go:123] Gathering logs for describe nodes ...
	I0916 10:54:51.180862    4163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 10:54:51.220008    4163 logs.go:123] Gathering logs for coredns [8c28f5c3b6ca] ...
	I0916 10:54:51.220020    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c28f5c3b6ca"
	I0916 10:54:51.232501    4163 logs.go:123] Gathering logs for kube-proxy [770552188df4] ...
	I0916 10:54:51.232513    4163 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 770552188df4"
	I0916 10:54:53.746744    4163 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0916 10:54:58.748865    4163 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 10:54:58.754410    4163 out.go:201] 
	W0916 10:54:58.758500    4163 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0916 10:54:58.758512    4163 out.go:270] * 
	W0916 10:54:58.759231    4163 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:54:58.775298    4163 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-385000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.24s)
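
Note on the failure mode above: the stopped-upgrade cluster came up far enough to gather logs, but the apiserver never answered /healthz within the 6m0s node-wait budget. The probe loop below is a minimal Go sketch of that health check, using the guest endpoint from the log (https://10.0.2.15:8443/healthz); the 5-second poll interval and the TLS handling are illustrative assumptions, not minikube's actual api_server.go code.

	// Minimal sketch of the apiserver health wait (assumed shape, not minikube's code).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log; the in-guest apiserver uses a
		// self-signed cert, so this diagnostic probe skips verification.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(5 * time.Second) // assumed poll interval
		}
		fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
	}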

TestPause/serial/Start (10.05s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-303000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-303000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.986146583s)

-- stdout --
	* [pause-303000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-303000" primary control-plane node in "pause-303000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-303000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-303000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-303000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-303000 -n pause-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-303000 -n pause-303000: exit status 7 (65.405ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-303000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.05s)
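
Note: this failure, and every remaining qemu2 start failure in this report, has the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client is refused before QEMU can launch. A quick host-side check, sketched in Go; the socket path is taken verbatim from the log, and with the daemon down the dial reproduces the same "connection refused" without going through minikube.

	// Dial the socket_vmnet control socket the way socket_vmnet_client would.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the daemon down this prints the same "connection refused"
			// that every qemu2 start in this report hits.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}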

TestNoKubernetes/serial/StartWithK8s (10s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-472000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-472000 --driver=qemu2 : exit status 80 (9.935327208s)

-- stdout --
	* [NoKubernetes-472000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-472000" primary control-plane node in "NoKubernetes-472000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-472000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-472000 -n NoKubernetes-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-472000 -n NoKubernetes-472000: exit status 7 (61.532791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.00s)
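
Note: the "StartHost failed, but will try again" line followed by "Deleting ... in qemu2" reflects minikube's single-retry start path: create the host, and on failure tear down the half-created machine and try once more after five seconds. The sketch below mirrors only that control flow as seen in the logs; createHost and deleteHost are hypothetical stand-ins, not minikube's real API.

	// Schematic of the create -> delete -> retry-in-5s flow; createHost and
	// deleteHost are hypothetical stand-ins, not minikube's real API.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost(profile string) error {
		// Stand-in: the real path shells out to socket_vmnet_client + qemu-system-aarch64.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func deleteHost(profile string) { fmt.Printf("* Deleting %q in qemu2 ...\n", profile) }

	func main() {
		const profile = "NoKubernetes-472000"
		if err := createHost(profile); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteHost(profile)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds"
			if err := createHost(profile); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}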

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-472000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-472000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239999833s)

-- stdout --
	* [NoKubernetes-472000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-472000
	* Restarting existing qemu2 VM for "NoKubernetes-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-472000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-472000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-472000 -n NoKubernetes-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-472000 -n NoKubernetes-472000: exit status 7 (51.541875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)
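
Note: after each failed start, the post-mortem helper runs the status command shown above and treats exit status 7 ("Stopped") as acceptable. A small Go sketch of reading that exit code, using the exact command line from the log (the profile name is the one under test here):

	// Run the post-mortem status command and read its exit code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "NoKubernetes-472000", "-n", "NoKubernetes-472000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // prints "Stopped" in the runs above
		if ee, ok := err.(*exec.ExitError); ok {
			// Exit status 7 means the profile exists but the host is stopped;
			// the harness logs it as "may be ok" rather than failing again.
			fmt.Println("exit status:", ee.ExitCode())
		}
	}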

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-472000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-472000 --no-kubernetes --driver=qemu2 : exit status 80 (5.235534209s)

-- stdout --
	* [NoKubernetes-472000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-472000
	* Restarting existing qemu2 VM for "NoKubernetes-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-472000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-472000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-472000 -n NoKubernetes-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-472000 -n NoKubernetes-472000: exit status 7 (57.995916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-472000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-472000 --driver=qemu2 : exit status 80 (5.252934916s)

-- stdout --
	* [NoKubernetes-472000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-472000
	* Restarting existing qemu2 VM for "NoKubernetes-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-472000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-472000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-472000 -n NoKubernetes-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-472000 -n NoKubernetes-472000: exit status 7 (47.207792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)

TestNetworkPlugins/group/auto/Start (9.93s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.923829916s)

-- stdout --
	* [auto-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-900000" primary control-plane node in "auto-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:53:15.933905    4410 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:53:15.934024    4410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:15.934027    4410 out.go:358] Setting ErrFile to fd 2...
	I0916 10:53:15.934030    4410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:15.934190    4410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:53:15.935214    4410 out.go:352] Setting JSON to false
	I0916 10:53:15.951499    4410 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3159,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:53:15.951602    4410 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:53:15.957709    4410 out.go:177] * [auto-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:53:15.965491    4410 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:53:15.965532    4410 notify.go:220] Checking for updates...
	I0916 10:53:15.973592    4410 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:53:15.975048    4410 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:53:15.978542    4410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:53:15.981542    4410 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:53:15.994581    4410 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:53:15.997923    4410 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:53:15.997994    4410 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:53:15.998046    4410 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:53:16.002547    4410 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:53:16.007542    4410 start.go:297] selected driver: qemu2
	I0916 10:53:16.007547    4410 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:53:16.007551    4410 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:53:16.009651    4410 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:53:16.012540    4410 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:53:16.015637    4410 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:53:16.015654    4410 cni.go:84] Creating CNI manager for ""
	I0916 10:53:16.015674    4410 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:53:16.015682    4410 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:53:16.015710    4410 start.go:340] cluster config:
	{Name:auto-900000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:53:16.019377    4410 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:53:16.026594    4410 out.go:177] * Starting "auto-900000" primary control-plane node in "auto-900000" cluster
	I0916 10:53:16.030570    4410 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:53:16.030586    4410 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:53:16.030598    4410 cache.go:56] Caching tarball of preloaded images
	I0916 10:53:16.030658    4410 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:53:16.030663    4410 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:53:16.030733    4410 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/auto-900000/config.json ...
	I0916 10:53:16.030746    4410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/auto-900000/config.json: {Name:mke6c178cd8fe012d1442e7345a3730da0afb041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:16.031154    4410 start.go:360] acquireMachinesLock for auto-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:53:16.031185    4410 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "auto-900000"
	I0916 10:53:16.031193    4410 start.go:93] Provisioning new machine with config: &{Name:auto-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:53:16.031223    4410 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:53:16.035607    4410 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:53:16.050753    4410 start.go:159] libmachine.API.Create for "auto-900000" (driver="qemu2")
	I0916 10:53:16.050782    4410 client.go:168] LocalClient.Create starting
	I0916 10:53:16.050841    4410 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:53:16.050869    4410 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:16.050879    4410 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:16.050934    4410 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:53:16.050957    4410 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:16.050966    4410 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:16.051373    4410 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:53:16.213595    4410 main.go:141] libmachine: Creating SSH key...
	I0916 10:53:16.269859    4410 main.go:141] libmachine: Creating Disk image...
	I0916 10:53:16.269865    4410 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:53:16.270041    4410 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2
	I0916 10:53:16.279256    4410 main.go:141] libmachine: STDOUT: 
	I0916 10:53:16.279340    4410 main.go:141] libmachine: STDERR: 
	I0916 10:53:16.279404    4410 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2 +20000M
	I0916 10:53:16.287499    4410 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:53:16.287538    4410 main.go:141] libmachine: STDERR: 
	I0916 10:53:16.287551    4410 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2
	I0916 10:53:16.287555    4410 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:53:16.287565    4410 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:53:16.287601    4410 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:97:13:ea:38:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2
	I0916 10:53:16.289216    4410 main.go:141] libmachine: STDOUT: 
	I0916 10:53:16.289253    4410 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:53:16.289275    4410 client.go:171] duration metric: took 238.494334ms to LocalClient.Create
	I0916 10:53:18.291396    4410 start.go:128] duration metric: took 2.2602135s to createHost
	I0916 10:53:18.291454    4410 start.go:83] releasing machines lock for "auto-900000", held for 2.260328958s
	W0916 10:53:18.291521    4410 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:18.298159    4410 out.go:177] * Deleting "auto-900000" in qemu2 ...
	W0916 10:53:18.329395    4410 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:18.329420    4410 start.go:729] Will try again in 5 seconds ...
	I0916 10:53:23.329685    4410 start.go:360] acquireMachinesLock for auto-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:53:23.330255    4410 start.go:364] duration metric: took 443.209µs to acquireMachinesLock for "auto-900000"
	I0916 10:53:23.330415    4410 start.go:93] Provisioning new machine with config: &{Name:auto-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:53:23.330671    4410 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:53:23.335193    4410 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:53:23.383678    4410 start.go:159] libmachine.API.Create for "auto-900000" (driver="qemu2")
	I0916 10:53:23.383730    4410 client.go:168] LocalClient.Create starting
	I0916 10:53:23.383854    4410 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:53:23.383928    4410 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:23.383948    4410 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:23.384010    4410 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:53:23.384060    4410 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:23.384085    4410 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:23.384659    4410 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:53:23.554628    4410 main.go:141] libmachine: Creating SSH key...
	I0916 10:53:23.765582    4410 main.go:141] libmachine: Creating Disk image...
	I0916 10:53:23.765594    4410 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:53:23.765809    4410 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2
	I0916 10:53:23.775730    4410 main.go:141] libmachine: STDOUT: 
	I0916 10:53:23.775799    4410 main.go:141] libmachine: STDERR: 
	I0916 10:53:23.775854    4410 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2 +20000M
	I0916 10:53:23.783928    4410 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:53:23.783968    4410 main.go:141] libmachine: STDERR: 
	I0916 10:53:23.783978    4410 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2
	I0916 10:53:23.783985    4410 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:53:23.783993    4410 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:53:23.784018    4410 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:0c:10:38:df:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/auto-900000/disk.qcow2
	I0916 10:53:23.785715    4410 main.go:141] libmachine: STDOUT: 
	I0916 10:53:23.785728    4410 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:53:23.785742    4410 client.go:171] duration metric: took 402.017667ms to LocalClient.Create
	I0916 10:53:25.787821    4410 start.go:128] duration metric: took 2.45719825s to createHost
	I0916 10:53:25.787863    4410 start.go:83] releasing machines lock for "auto-900000", held for 2.457661125s
	W0916 10:53:25.788086    4410 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:25.804462    4410 out.go:201] 
	W0916 10:53:25.807683    4410 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:53:25.807697    4410 out.go:270] * 
	W0916 10:53:25.808949    4410 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:53:25.820620    4410 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.93s)
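
Note: in the stderr trace above, disk preparation succeeds before the network step fails: libmachine converts the raw boot image to qcow2 and grows it by 20000 MB via qemu-img, and only the socket_vmnet_client launch is refused. The two qemu-img steps can be reproduced from Go as below; the commands match the log, but the file names here are illustrative (the log uses per-profile paths under .minikube/machines).

	// The two qemu-img steps from the log: raw -> qcow2, then grow by 20000 MB.
	// File names are illustrative; the log uses per-profile paths under .minikube/machines.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		if err != nil {
			panic(err)
		}
	}

	func main() {
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
		run("qemu-img", "resize", "disk.qcow2", "+20000M")
	}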

TestNetworkPlugins/group/kindnet/Start (9.88s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.880530167s)

-- stdout --
	* [kindnet-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-900000" primary control-plane node in "kindnet-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:53:27.967078    4519 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:53:27.967203    4519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:27.967206    4519 out.go:358] Setting ErrFile to fd 2...
	I0916 10:53:27.967214    4519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:27.967354    4519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:53:27.968460    4519 out.go:352] Setting JSON to false
	I0916 10:53:27.985208    4519 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3171,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:53:27.985283    4519 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:53:27.991128    4519 out.go:177] * [kindnet-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:53:27.998985    4519 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:53:27.999047    4519 notify.go:220] Checking for updates...
	I0916 10:53:28.005910    4519 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:53:28.008942    4519 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:53:28.011904    4519 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:53:28.014897    4519 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:53:28.017884    4519 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:53:28.021230    4519 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:53:28.021292    4519 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:53:28.021340    4519 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:53:28.025867    4519 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:53:28.031901    4519 start.go:297] selected driver: qemu2
	I0916 10:53:28.031907    4519 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:53:28.031915    4519 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:53:28.034165    4519 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:53:28.036923    4519 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:53:28.039978    4519 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:53:28.039993    4519 cni.go:84] Creating CNI manager for "kindnet"
	I0916 10:53:28.039996    4519 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:53:28.040024    4519 start.go:340] cluster config:
	{Name:kindnet-900000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:53:28.043646    4519 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:53:28.050917    4519 out.go:177] * Starting "kindnet-900000" primary control-plane node in "kindnet-900000" cluster
	I0916 10:53:28.054971    4519 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:53:28.054988    4519 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:53:28.055001    4519 cache.go:56] Caching tarball of preloaded images
	I0916 10:53:28.055070    4519 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:53:28.055076    4519 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:53:28.055126    4519 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/kindnet-900000/config.json ...
	I0916 10:53:28.055138    4519 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/kindnet-900000/config.json: {Name:mkd51ba6c113cec81cc2c80e427f75a75fe93eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:28.055350    4519 start.go:360] acquireMachinesLock for kindnet-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:53:28.055381    4519 start.go:364] duration metric: took 26.125µs to acquireMachinesLock for "kindnet-900000"
	I0916 10:53:28.055392    4519 start.go:93] Provisioning new machine with config: &{Name:kindnet-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:53:28.055426    4519 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:53:28.063950    4519 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:53:28.080494    4519 start.go:159] libmachine.API.Create for "kindnet-900000" (driver="qemu2")
	I0916 10:53:28.080523    4519 client.go:168] LocalClient.Create starting
	I0916 10:53:28.080590    4519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:53:28.080625    4519 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:28.080635    4519 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:28.080672    4519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:53:28.080698    4519 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:28.080706    4519 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:28.081067    4519 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:53:28.245504    4519 main.go:141] libmachine: Creating SSH key...
	I0916 10:53:28.399262    4519 main.go:141] libmachine: Creating Disk image...
	I0916 10:53:28.399271    4519 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:53:28.399507    4519 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2
	I0916 10:53:28.409274    4519 main.go:141] libmachine: STDOUT: 
	I0916 10:53:28.409294    4519 main.go:141] libmachine: STDERR: 
	I0916 10:53:28.409348    4519 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2 +20000M
	I0916 10:53:28.417289    4519 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:53:28.417309    4519 main.go:141] libmachine: STDERR: 
	I0916 10:53:28.417331    4519 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2
	I0916 10:53:28.417336    4519 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:53:28.417345    4519 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:53:28.417377    4519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:e2:7d:22:64:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2
	I0916 10:53:28.419114    4519 main.go:141] libmachine: STDOUT: 
	I0916 10:53:28.419127    4519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:53:28.419148    4519 client.go:171] duration metric: took 338.628709ms to LocalClient.Create
	I0916 10:53:30.421270    4519 start.go:128] duration metric: took 2.365885667s to createHost
	I0916 10:53:30.421306    4519 start.go:83] releasing machines lock for "kindnet-900000", held for 2.365988041s
	W0916 10:53:30.421338    4519 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:30.436450    4519 out.go:177] * Deleting "kindnet-900000" in qemu2 ...
	W0916 10:53:30.460839    4519 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:30.460852    4519 start.go:729] Will try again in 5 seconds ...
	I0916 10:53:35.461257    4519 start.go:360] acquireMachinesLock for kindnet-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:53:35.461526    4519 start.go:364] duration metric: took 233.125µs to acquireMachinesLock for "kindnet-900000"
	I0916 10:53:35.461558    4519 start.go:93] Provisioning new machine with config: &{Name:kindnet-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:kindnet-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:53:35.461731    4519 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:53:35.467186    4519 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:53:35.494440    4519 start.go:159] libmachine.API.Create for "kindnet-900000" (driver="qemu2")
	I0916 10:53:35.494480    4519 client.go:168] LocalClient.Create starting
	I0916 10:53:35.494552    4519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:53:35.494602    4519 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:35.494616    4519 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:35.494658    4519 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:53:35.494696    4519 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:35.494704    4519 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:35.495095    4519 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:53:35.658000    4519 main.go:141] libmachine: Creating SSH key...
	I0916 10:53:35.762597    4519 main.go:141] libmachine: Creating Disk image...
	I0916 10:53:35.762603    4519 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:53:35.762803    4519 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2
	I0916 10:53:35.772346    4519 main.go:141] libmachine: STDOUT: 
	I0916 10:53:35.772370    4519 main.go:141] libmachine: STDERR: 
	I0916 10:53:35.772435    4519 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2 +20000M
	I0916 10:53:35.780473    4519 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:53:35.780490    4519 main.go:141] libmachine: STDERR: 
	I0916 10:53:35.780508    4519 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2
	I0916 10:53:35.780512    4519 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:53:35.780524    4519 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:53:35.780552    4519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:51:2e:92:cb:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kindnet-900000/disk.qcow2
	I0916 10:53:35.782234    4519 main.go:141] libmachine: STDOUT: 
	I0916 10:53:35.782260    4519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:53:35.782273    4519 client.go:171] duration metric: took 287.796583ms to LocalClient.Create
	I0916 10:53:37.782660    4519 start.go:128] duration metric: took 2.320980792s to createHost
	I0916 10:53:37.782707    4519 start.go:83] releasing machines lock for "kindnet-900000", held for 2.321238875s
	W0916 10:53:37.782974    4519 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:37.792045    4519 out.go:201] 
	W0916 10:53:37.796135    4519 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:53:37.796165    4519 out.go:270] * 
	* 
	W0916 10:53:37.797123    4519 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:53:37.808919    4519 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.88s)
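Every failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. no socket_vmnet daemon is accepting connections on that path, so the VM never gets a network and the start aborts. A minimal standalone Go probe (hypothetical, not part of the test suite) that reproduces the check the qemu wrapper is effectively making:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from the failing socket_vmnet_client invocation in the log.
		// Note: connecting may require write permission on the socket file.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the failure mode in the log:
			// the socket path exists (or not), but no daemon is listening.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}

On a working host, the socket_vmnet daemon is typically installed and started via Homebrew ("brew install socket_vmnet" followed by starting its service, per the minikube qemu driver documentation); whether the service was running on this CI agent is not shown in the log.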

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.799343084s)

-- stdout --
	* [calico-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-900000" primary control-plane node in "calico-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:53:40.040859    4632 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:53:40.041027    4632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:40.041033    4632 out.go:358] Setting ErrFile to fd 2...
	I0916 10:53:40.041036    4632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:40.041172    4632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:53:40.042452    4632 out.go:352] Setting JSON to false
	I0916 10:53:40.060986    4632 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3184,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:53:40.061069    4632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:53:40.066701    4632 out.go:177] * [calico-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:53:40.073621    4632 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:53:40.073729    4632 notify.go:220] Checking for updates...
	I0916 10:53:40.080679    4632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:53:40.083626    4632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:53:40.086696    4632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:53:40.089655    4632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:53:40.090883    4632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:53:40.094065    4632 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:53:40.094134    4632 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:53:40.094182    4632 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:53:40.098688    4632 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:53:40.103676    4632 start.go:297] selected driver: qemu2
	I0916 10:53:40.103685    4632 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:53:40.103691    4632 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:53:40.106033    4632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:53:40.109646    4632 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:53:40.112805    4632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:53:40.112826    4632 cni.go:84] Creating CNI manager for "calico"
	I0916 10:53:40.112831    4632 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0916 10:53:40.112877    4632 start.go:340] cluster config:
	{Name:calico-900000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:53:40.116668    4632 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:53:40.123667    4632 out.go:177] * Starting "calico-900000" primary control-plane node in "calico-900000" cluster
	I0916 10:53:40.127620    4632 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:53:40.127653    4632 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:53:40.127669    4632 cache.go:56] Caching tarball of preloaded images
	I0916 10:53:40.127768    4632 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:53:40.127775    4632 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:53:40.127853    4632 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/calico-900000/config.json ...
	I0916 10:53:40.127865    4632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/calico-900000/config.json: {Name:mka2a04986849041a287bbad7c703150a865749c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:40.128219    4632 start.go:360] acquireMachinesLock for calico-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:53:40.128250    4632 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "calico-900000"
	I0916 10:53:40.128266    4632 start.go:93] Provisioning new machine with config: &{Name:calico-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:53:40.128298    4632 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:53:40.132734    4632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:53:40.149268    4632 start.go:159] libmachine.API.Create for "calico-900000" (driver="qemu2")
	I0916 10:53:40.149300    4632 client.go:168] LocalClient.Create starting
	I0916 10:53:40.149368    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:53:40.149401    4632 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:40.149412    4632 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:40.149452    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:53:40.149477    4632 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:40.149485    4632 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:40.149894    4632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:53:40.310048    4632 main.go:141] libmachine: Creating SSH key...
	I0916 10:53:40.364585    4632 main.go:141] libmachine: Creating Disk image...
	I0916 10:53:40.364594    4632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:53:40.364801    4632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2
	I0916 10:53:40.375119    4632 main.go:141] libmachine: STDOUT: 
	I0916 10:53:40.375139    4632 main.go:141] libmachine: STDERR: 
	I0916 10:53:40.375212    4632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2 +20000M
	I0916 10:53:40.383542    4632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:53:40.383566    4632 main.go:141] libmachine: STDERR: 
	I0916 10:53:40.383580    4632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2
	I0916 10:53:40.383585    4632 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:53:40.383597    4632 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:53:40.383626    4632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:16:73:35:de:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2
	I0916 10:53:40.385309    4632 main.go:141] libmachine: STDOUT: 
	I0916 10:53:40.385322    4632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:53:40.385342    4632 client.go:171] duration metric: took 236.043917ms to LocalClient.Create
	I0916 10:53:42.387455    4632 start.go:128] duration metric: took 2.259200833s to createHost
	I0916 10:53:42.387542    4632 start.go:83] releasing machines lock for "calico-900000", held for 2.259351541s
	W0916 10:53:42.387611    4632 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:42.393365    4632 out.go:177] * Deleting "calico-900000" in qemu2 ...
	W0916 10:53:42.421954    4632 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:42.421972    4632 start.go:729] Will try again in 5 seconds ...
	I0916 10:53:47.423993    4632 start.go:360] acquireMachinesLock for calico-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:53:47.424637    4632 start.go:364] duration metric: took 557.625µs to acquireMachinesLock for "calico-900000"
	I0916 10:53:47.424766    4632 start.go:93] Provisioning new machine with config: &{Name:calico-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:53:47.424995    4632 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:53:47.431474    4632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:53:47.476779    4632 start.go:159] libmachine.API.Create for "calico-900000" (driver="qemu2")
	I0916 10:53:47.476845    4632 client.go:168] LocalClient.Create starting
	I0916 10:53:47.476991    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:53:47.477063    4632 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:47.477083    4632 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:47.477145    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:53:47.477195    4632 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:47.477211    4632 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:47.477863    4632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:53:47.647656    4632 main.go:141] libmachine: Creating SSH key...
	I0916 10:53:47.745693    4632 main.go:141] libmachine: Creating Disk image...
	I0916 10:53:47.745706    4632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:53:47.745920    4632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2
	I0916 10:53:47.755538    4632 main.go:141] libmachine: STDOUT: 
	I0916 10:53:47.755557    4632 main.go:141] libmachine: STDERR: 
	I0916 10:53:47.755616    4632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2 +20000M
	I0916 10:53:47.763557    4632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:53:47.763574    4632 main.go:141] libmachine: STDERR: 
	I0916 10:53:47.763587    4632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2
	I0916 10:53:47.763591    4632 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:53:47.763601    4632 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:53:47.763624    4632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:f2:9c:59:e8:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/calico-900000/disk.qcow2
	I0916 10:53:47.765331    4632 main.go:141] libmachine: STDOUT: 
	I0916 10:53:47.765344    4632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:53:47.765356    4632 client.go:171] duration metric: took 288.515041ms to LocalClient.Create
	I0916 10:53:49.767395    4632 start.go:128] duration metric: took 2.342450541s to createHost
	I0916 10:53:49.767496    4632 start.go:83] releasing machines lock for "calico-900000", held for 2.342865s
	W0916 10:53:49.767643    4632 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:49.782944    4632 out.go:201] 
	W0916 10:53:49.786059    4632 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:53:49.786066    4632 out.go:270] * 
	* 
	W0916 10:53:49.786627    4632 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:53:49.800045    4632 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
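As with kindnet above, the calico start fails before any CNI-specific work begins, and the log shows the same two-attempt flow: createHost, connection refused, "Deleting ... in qemu2", a fixed 5-second pause ("Will try again in 5 seconds ..."), one retry, then GUEST_PROVISION and exit status 80. A sketch of that observed control flow, with hypothetical names and not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHostWithRetry mirrors the shape seen in the logs: one failed
	// create, a cleanup pass, a fixed 5s pause, a single retry, and a
	// hard GUEST_PROVISION failure if the retry also fails.
	func createHostWithRetry(create, cleanup func() error) error {
		if err := create(); err == nil {
			return nil
		}
		_ = cleanup() // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second)
		if err := create(); err != nil {
			return fmt.Errorf("GUEST_PROVISION: %w", err)
		}
		return nil
	}

	func main() {
		refused := errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		err := createHostWithRetry(
			func() error { return refused }, // both attempts hit the same refused socket
			func() error { return nil },
		)
		if err != nil {
			fmt.Println("X Exiting due to", err) // corresponds to exit status 80
		}
	}

The custom-flannel run below follows this pattern identically, which points at the shared socket_vmnet daemon on the agent rather than at any individual network plugin under test.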

TestNetworkPlugins/group/custom-flannel/Start (10.02s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (10.017614417s)

-- stdout --
	* [custom-flannel-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-900000" primary control-plane node in "custom-flannel-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:53:52.201046    4753 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:53:52.201183    4753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:52.201187    4753 out.go:358] Setting ErrFile to fd 2...
	I0916 10:53:52.201190    4753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:53:52.201349    4753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:53:52.202376    4753 out.go:352] Setting JSON to false
	I0916 10:53:52.219096    4753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3196,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:53:52.219167    4753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:53:52.225373    4753 out.go:177] * [custom-flannel-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:53:52.233187    4753 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:53:52.233238    4753 notify.go:220] Checking for updates...
	I0916 10:53:52.240164    4753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:53:52.243157    4753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:53:52.246180    4753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:53:52.249144    4753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:53:52.252183    4753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:53:52.255499    4753 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:53:52.255565    4753 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:53:52.255615    4753 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:53:52.260149    4753 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:53:52.267119    4753 start.go:297] selected driver: qemu2
	I0916 10:53:52.267125    4753 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:53:52.267131    4753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:53:52.269318    4753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:53:52.272133    4753 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:53:52.275225    4753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:53:52.275241    4753 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0916 10:53:52.275248    4753 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0916 10:53:52.275273    4753 start.go:340] cluster config:
	{Name:custom-flannel-900000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:53:52.278629    4753 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:53:52.286147    4753 out.go:177] * Starting "custom-flannel-900000" primary control-plane node in "custom-flannel-900000" cluster
	I0916 10:53:52.290295    4753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:53:52.290313    4753 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:53:52.290329    4753 cache.go:56] Caching tarball of preloaded images
	I0916 10:53:52.290394    4753 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:53:52.290399    4753 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:53:52.290453    4753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/custom-flannel-900000/config.json ...
	I0916 10:53:52.290463    4753 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/custom-flannel-900000/config.json: {Name:mk9644b3de56f27b189b1a8c87142e03522bbbeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:53:52.290683    4753 start.go:360] acquireMachinesLock for custom-flannel-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:53:52.290713    4753 start.go:364] duration metric: took 24.042µs to acquireMachinesLock for "custom-flannel-900000"
	I0916 10:53:52.290722    4753 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:53:52.290753    4753 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:53:52.299109    4753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:53:52.315004    4753 start.go:159] libmachine.API.Create for "custom-flannel-900000" (driver="qemu2")
	I0916 10:53:52.315032    4753 client.go:168] LocalClient.Create starting
	I0916 10:53:52.315091    4753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:53:52.315127    4753 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:52.315136    4753 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:52.315170    4753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:53:52.315196    4753 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:52.315203    4753 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:52.315586    4753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:53:52.487565    4753 main.go:141] libmachine: Creating SSH key...
	I0916 10:53:52.626565    4753 main.go:141] libmachine: Creating Disk image...
	I0916 10:53:52.626577    4753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:53:52.626779    4753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2
	I0916 10:53:52.636587    4753 main.go:141] libmachine: STDOUT: 
	I0916 10:53:52.636619    4753 main.go:141] libmachine: STDERR: 
	I0916 10:53:52.636683    4753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2 +20000M
	I0916 10:53:52.644900    4753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:53:52.644915    4753 main.go:141] libmachine: STDERR: 
	I0916 10:53:52.644930    4753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2
	I0916 10:53:52.644936    4753 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:53:52.644947    4753 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:53:52.644973    4753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:51:23:86:6a:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2
	I0916 10:53:52.646654    4753 main.go:141] libmachine: STDOUT: 
	I0916 10:53:52.646667    4753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:53:52.646692    4753 client.go:171] duration metric: took 331.663041ms to LocalClient.Create
	I0916 10:53:54.648745    4753 start.go:128] duration metric: took 2.358049792s to createHost
	I0916 10:53:54.648816    4753 start.go:83] releasing machines lock for "custom-flannel-900000", held for 2.358168042s
	W0916 10:53:54.648848    4753 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:54.658206    4753 out.go:177] * Deleting "custom-flannel-900000" in qemu2 ...
	W0916 10:53:54.683297    4753 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:53:54.683314    4753 start.go:729] Will try again in 5 seconds ...
	I0916 10:53:59.685316    4753 start.go:360] acquireMachinesLock for custom-flannel-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:53:59.685799    4753 start.go:364] duration metric: took 406.417µs to acquireMachinesLock for "custom-flannel-900000"
	I0916 10:53:59.685855    4753 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:53:59.686061    4753 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:53:59.694636    4753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:53:59.739231    4753 start.go:159] libmachine.API.Create for "custom-flannel-900000" (driver="qemu2")
	I0916 10:53:59.739319    4753 client.go:168] LocalClient.Create starting
	I0916 10:53:59.739508    4753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:53:59.739570    4753 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:59.739586    4753 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:59.739651    4753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:53:59.739699    4753 main.go:141] libmachine: Decoding PEM data...
	I0916 10:53:59.739710    4753 main.go:141] libmachine: Parsing certificate...
	I0916 10:53:59.740260    4753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:53:59.908456    4753 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:00.125614    4753 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:00.125627    4753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:00.125877    4753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2
	I0916 10:54:00.135827    4753 main.go:141] libmachine: STDOUT: 
	I0916 10:54:00.135851    4753 main.go:141] libmachine: STDERR: 
	I0916 10:54:00.135913    4753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2 +20000M
	I0916 10:54:00.144095    4753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:00.144109    4753 main.go:141] libmachine: STDERR: 
	I0916 10:54:00.144122    4753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2
	I0916 10:54:00.144127    4753 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:00.144136    4753 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:00.144186    4753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:ba:3f:ca:6e:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/custom-flannel-900000/disk.qcow2
	I0916 10:54:00.145936    4753 main.go:141] libmachine: STDOUT: 
	I0916 10:54:00.145950    4753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:00.145974    4753 client.go:171] duration metric: took 406.649041ms to LocalClient.Create
	I0916 10:54:02.148128    4753 start.go:128] duration metric: took 2.462101833s to createHost
	I0916 10:54:02.148313    4753 start.go:83] releasing machines lock for "custom-flannel-900000", held for 2.462519291s
	W0916 10:54:02.148742    4753 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:02.158396    4753 out.go:201] 
	W0916 10:54:02.165426    4753 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:54:02.165461    4753 out.go:270] * 
	* 
	W0916 10:54:02.168193    4753 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:54:02.177321    4753 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (10.02s)
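Every failure in this group shares the root cause shown in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the QEMU process is never launched and minikube aborts after its single retry. The connectivity check can be reproduced outside the test harness; the following is a minimal Go sketch (filename and messages are illustrative, the socket path is copied from the failing command line):

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Socket path taken verbatim from the socket_vmnet_client invocation above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// With no daemon listening, this reports "connection refused",
		// matching the STDERR captured in the log.
		fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails, restarting the socket_vmnet daemon (it normally runs as root; the exact service name depends on how it was installed) should clear this whole group of failures.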

TestNetworkPlugins/group/false/Start (10.12s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.119696s)

-- stdout --
	* [false-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-900000" primary control-plane node in "false-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:54:04.612698    4871 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:54:04.612841    4871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:04.612854    4871 out.go:358] Setting ErrFile to fd 2...
	I0916 10:54:04.612856    4871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:04.613012    4871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:54:04.614447    4871 out.go:352] Setting JSON to false
	I0916 10:54:04.631440    4871 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3208,"bootTime":1726506036,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:54:04.631504    4871 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:54:04.637926    4871 out.go:177] * [false-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:54:04.645854    4871 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:54:04.645908    4871 notify.go:220] Checking for updates...
	I0916 10:54:04.653845    4871 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:54:04.657804    4871 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:54:04.660823    4871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:54:04.663868    4871 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:54:04.666742    4871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:54:04.670116    4871 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:54:04.670191    4871 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:54:04.670240    4871 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:54:04.673814    4871 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:54:04.680864    4871 start.go:297] selected driver: qemu2
	I0916 10:54:04.680870    4871 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:54:04.680876    4871 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:54:04.683012    4871 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:54:04.685819    4871 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:54:04.687459    4871 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:54:04.687473    4871 cni.go:84] Creating CNI manager for "false"
	I0916 10:54:04.687496    4871 start.go:340] cluster config:
	{Name:false-900000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:54:04.690988    4871 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:54:04.697902    4871 out.go:177] * Starting "false-900000" primary control-plane node in "false-900000" cluster
	I0916 10:54:04.701788    4871 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:54:04.701803    4871 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:54:04.701811    4871 cache.go:56] Caching tarball of preloaded images
	I0916 10:54:04.701869    4871 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:54:04.701874    4871 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:54:04.701920    4871 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/false-900000/config.json ...
	I0916 10:54:04.701930    4871 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/false-900000/config.json: {Name:mk6f1e12e44a40719d6c5820f4b3fd664d98cc4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:54:04.702135    4871 start.go:360] acquireMachinesLock for false-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:54:04.702169    4871 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "false-900000"
	I0916 10:54:04.702178    4871 start.go:93] Provisioning new machine with config: &{Name:false-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:54:04.702218    4871 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:54:04.710792    4871 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:54:04.725890    4871 start.go:159] libmachine.API.Create for "false-900000" (driver="qemu2")
	I0916 10:54:04.725917    4871 client.go:168] LocalClient.Create starting
	I0916 10:54:04.725983    4871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:54:04.726018    4871 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:04.726025    4871 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:04.726061    4871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:54:04.726084    4871 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:04.726092    4871 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:04.726447    4871 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:54:04.887621    4871 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:05.113709    4871 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:05.113720    4871 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:05.113949    4871 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2
	I0916 10:54:05.123906    4871 main.go:141] libmachine: STDOUT: 
	I0916 10:54:05.123924    4871 main.go:141] libmachine: STDERR: 
	I0916 10:54:05.123984    4871 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2 +20000M
	I0916 10:54:05.132078    4871 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:05.132092    4871 main.go:141] libmachine: STDERR: 
	I0916 10:54:05.132106    4871 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2
	I0916 10:54:05.132110    4871 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:05.132126    4871 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:05.132157    4871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:f3:a4:4c:3f:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2
	I0916 10:54:05.133826    4871 main.go:141] libmachine: STDOUT: 
	I0916 10:54:05.133839    4871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:05.133863    4871 client.go:171] duration metric: took 407.953042ms to LocalClient.Create
	I0916 10:54:07.136018    4871 start.go:128] duration metric: took 2.433838s to createHost
	I0916 10:54:07.136144    4871 start.go:83] releasing machines lock for "false-900000", held for 2.433986625s
	W0916 10:54:07.136220    4871 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:07.142488    4871 out.go:177] * Deleting "false-900000" in qemu2 ...
	W0916 10:54:07.173183    4871 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:07.173214    4871 start.go:729] Will try again in 5 seconds ...
	I0916 10:54:12.175193    4871 start.go:360] acquireMachinesLock for false-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:54:12.175561    4871 start.go:364] duration metric: took 313.833µs to acquireMachinesLock for "false-900000"
	I0916 10:54:12.175637    4871 start.go:93] Provisioning new machine with config: &{Name:false-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:54:12.175770    4871 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:54:12.185212    4871 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:54:12.223856    4871 start.go:159] libmachine.API.Create for "false-900000" (driver="qemu2")
	I0916 10:54:12.223899    4871 client.go:168] LocalClient.Create starting
	I0916 10:54:12.224022    4871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:54:12.224086    4871 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:12.224100    4871 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:12.224177    4871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:54:12.224222    4871 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:12.224231    4871 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:12.224809    4871 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:54:12.393510    4871 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:12.630365    4871 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:12.630378    4871 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:12.630595    4871 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2
	I0916 10:54:12.641012    4871 main.go:141] libmachine: STDOUT: 
	I0916 10:54:12.641036    4871 main.go:141] libmachine: STDERR: 
	I0916 10:54:12.641165    4871 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2 +20000M
	I0916 10:54:12.650092    4871 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:12.650116    4871 main.go:141] libmachine: STDERR: 
	I0916 10:54:12.650134    4871 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2
	I0916 10:54:12.650137    4871 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:12.650150    4871 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:12.650184    4871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:fd:82:71:e1:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/false-900000/disk.qcow2
	I0916 10:54:12.651968    4871 main.go:141] libmachine: STDOUT: 
	I0916 10:54:12.651982    4871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:12.651999    4871 client.go:171] duration metric: took 428.106834ms to LocalClient.Create
	I0916 10:54:14.654125    4871 start.go:128] duration metric: took 2.478386209s to createHost
	I0916 10:54:14.654194    4871 start.go:83] releasing machines lock for "false-900000", held for 2.478691583s
	W0916 10:54:14.654383    4871 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:14.671834    4871 out.go:201] 
	W0916 10:54:14.676739    4871 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:54:14.676757    4871 out.go:270] * 
	* 
	W0916 10:54:14.678115    4871 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:54:14.690842    4871 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.12s)
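Beyond the shared root cause, this stderr also documents minikube's create-retry flow: the first createHost fails, the half-created profile is deleted, start.go waits a fixed 5 seconds, makes exactly one more attempt, and then exits with GUEST_PROVISION (exit status 80, which net_test.go asserts on). A condensed Go sketch of that control flow, with createHost and deleteHost as hypothetical stand-ins for the libmachine calls logged above:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// Hypothetical stand-in: on this agent both attempts fail identically.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// Hypothetical stand-in for the profile cleanup ("* Deleting ... in qemu2 ...").
func deleteHost(profile string) {}

func main() {
	const profile = "false-900000"
	if err := createHost(profile); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost(profile)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // the exit status the test asserts on
		}
	}
}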

TestNetworkPlugins/group/enable-default-cni/Start (9.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.855899583s)

-- stdout --
	* [enable-default-cni-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-900000" primary control-plane node in "enable-default-cni-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:54:16.895946    4983 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:54:16.896087    4983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:16.896090    4983 out.go:358] Setting ErrFile to fd 2...
	I0916 10:54:16.896093    4983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:16.896222    4983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:54:16.897342    4983 out.go:352] Setting JSON to false
	I0916 10:54:16.913796    4983 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3220,"bootTime":1726506036,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:54:16.913867    4983 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:54:16.920414    4983 out.go:177] * [enable-default-cni-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:54:16.928257    4983 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:54:16.928306    4983 notify.go:220] Checking for updates...
	I0916 10:54:16.933835    4983 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:54:16.937187    4983 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:54:16.940229    4983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:54:16.943258    4983 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:54:16.946231    4983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:54:16.949568    4983 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:54:16.949639    4983 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:54:16.949697    4983 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:54:16.954222    4983 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:54:16.961200    4983 start.go:297] selected driver: qemu2
	I0916 10:54:16.961208    4983 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:54:16.961215    4983 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:54:16.963474    4983 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:54:16.966180    4983 out.go:177] * Automatically selected the socket_vmnet network
	E0916 10:54:16.969208    4983 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0916 10:54:16.969219    4983 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:54:16.969233    4983 cni.go:84] Creating CNI manager for "bridge"
	I0916 10:54:16.969238    4983 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:54:16.969265    4983 start.go:340] cluster config:
	{Name:enable-default-cni-900000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:54:16.972717    4983 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:54:16.980025    4983 out.go:177] * Starting "enable-default-cni-900000" primary control-plane node in "enable-default-cni-900000" cluster
	I0916 10:54:16.984189    4983 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:54:16.984203    4983 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:54:16.984210    4983 cache.go:56] Caching tarball of preloaded images
	I0916 10:54:16.984262    4983 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:54:16.984268    4983 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:54:16.984313    4983 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/enable-default-cni-900000/config.json ...
	I0916 10:54:16.984323    4983 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/enable-default-cni-900000/config.json: {Name:mk9143d72b2a92429c044479c245421eb2089011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:54:16.984536    4983 start.go:360] acquireMachinesLock for enable-default-cni-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:54:16.984569    4983 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "enable-default-cni-900000"
	I0916 10:54:16.984579    4983 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:54:16.984620    4983 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:54:16.993193    4983 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:54:17.008953    4983 start.go:159] libmachine.API.Create for "enable-default-cni-900000" (driver="qemu2")
	I0916 10:54:17.008986    4983 client.go:168] LocalClient.Create starting
	I0916 10:54:17.009057    4983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:54:17.009094    4983 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:17.009105    4983 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:17.009143    4983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:54:17.009169    4983 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:17.009175    4983 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:17.009523    4983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:54:17.172444    4983 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:17.244694    4983 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:17.244701    4983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:17.244871    4983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2
	I0916 10:54:17.254325    4983 main.go:141] libmachine: STDOUT: 
	I0916 10:54:17.254347    4983 main.go:141] libmachine: STDERR: 
	I0916 10:54:17.254408    4983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2 +20000M
	I0916 10:54:17.262292    4983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:17.262306    4983 main.go:141] libmachine: STDERR: 
	I0916 10:54:17.262325    4983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2
	I0916 10:54:17.262330    4983 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:17.262341    4983 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:17.262364    4983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:be:e3:fc:b3:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2
	I0916 10:54:17.263984    4983 main.go:141] libmachine: STDOUT: 
	I0916 10:54:17.264002    4983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:17.264024    4983 client.go:171] duration metric: took 255.038333ms to LocalClient.Create
	I0916 10:54:19.266208    4983 start.go:128] duration metric: took 2.281624375s to createHost
	I0916 10:54:19.266298    4983 start.go:83] releasing machines lock for "enable-default-cni-900000", held for 2.281787459s
	W0916 10:54:19.266355    4983 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:19.273664    4983 out.go:177] * Deleting "enable-default-cni-900000" in qemu2 ...
	W0916 10:54:19.314918    4983 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:19.314946    4983 start.go:729] Will try again in 5 seconds ...
	I0916 10:54:24.317124    4983 start.go:360] acquireMachinesLock for enable-default-cni-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:54:24.317724    4983 start.go:364] duration metric: took 475.792µs to acquireMachinesLock for "enable-default-cni-900000"
	I0916 10:54:24.317877    4983 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:54:24.318153    4983 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:54:24.323871    4983 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:54:24.374285    4983 start.go:159] libmachine.API.Create for "enable-default-cni-900000" (driver="qemu2")
	I0916 10:54:24.374359    4983 client.go:168] LocalClient.Create starting
	I0916 10:54:24.374528    4983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:54:24.374605    4983 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:24.374620    4983 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:24.374689    4983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:54:24.374736    4983 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:24.374747    4983 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:24.375305    4983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:54:24.548759    4983 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:24.661090    4983 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:24.661098    4983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:24.661290    4983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2
	I0916 10:54:24.671192    4983 main.go:141] libmachine: STDOUT: 
	I0916 10:54:24.671209    4983 main.go:141] libmachine: STDERR: 
	I0916 10:54:24.671282    4983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2 +20000M
	I0916 10:54:24.679535    4983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:24.679551    4983 main.go:141] libmachine: STDERR: 
	I0916 10:54:24.679565    4983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2
	I0916 10:54:24.679569    4983 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:24.679577    4983 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:24.679622    4983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:3f:ff:2b:ac:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/enable-default-cni-900000/disk.qcow2
	I0916 10:54:24.681335    4983 main.go:141] libmachine: STDOUT: 
	I0916 10:54:24.681350    4983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:24.681363    4983 client.go:171] duration metric: took 306.99325ms to LocalClient.Create
	I0916 10:54:26.683488    4983 start.go:128] duration metric: took 2.365369625s to createHost
	I0916 10:54:26.683598    4983 start.go:83] releasing machines lock for "enable-default-cni-900000", held for 2.365918209s
	W0916 10:54:26.683906    4983 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:26.693436    4983 out.go:201] 
	W0916 10:54:26.700478    4983 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:54:26.700512    4983 out.go:270] * 
	W0916 10:54:26.702541    4983 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:54:26.710358    4983 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.86s)
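
Triage note: every failure in this group dies at the same step. The wrapper /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon behind /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal first check on the agent, assuming the /opt/socket_vmnet layout shown in the command lines above (these are generic shell commands, not part of the captured output):

	# Does the unix socket the client is pointed at exist?
	ls -l /var/run/socket_vmnet
	# Is any socket_vmnet daemon process running?
	pgrep -fl socket_vmnet
	# If installed as a launchd daemon, is the job loaded at all?
	sudo launchctl list | grep -i socket_vmnet

A "Connection refused" (rather than "No such file or directory") means the socket file exists but nothing is accepting on it, pointing at a dead or wedged daemon rather than a missing install.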

TestNetworkPlugins/group/flannel/Start (9.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.8673785s)

-- stdout --
	* [flannel-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-900000" primary control-plane node in "flannel-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:54:28.954861    5092 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:54:28.955009    5092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:28.955012    5092 out.go:358] Setting ErrFile to fd 2...
	I0916 10:54:28.955015    5092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:28.955148    5092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:54:28.956282    5092 out.go:352] Setting JSON to false
	I0916 10:54:28.972645    5092 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3232,"bootTime":1726506036,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:54:28.972710    5092 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:54:28.979022    5092 out.go:177] * [flannel-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:54:28.986856    5092 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:54:28.986943    5092 notify.go:220] Checking for updates...
	I0916 10:54:28.994800    5092 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:54:28.997777    5092 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:54:29.000786    5092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:54:29.004928    5092 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:54:29.007799    5092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:54:29.011083    5092 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:54:29.011144    5092 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:54:29.011190    5092 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:54:29.015798    5092 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:54:29.022815    5092 start.go:297] selected driver: qemu2
	I0916 10:54:29.022822    5092 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:54:29.022830    5092 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:54:29.025122    5092 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:54:29.027829    5092 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:54:29.029055    5092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:54:29.029073    5092 cni.go:84] Creating CNI manager for "flannel"
	I0916 10:54:29.029076    5092 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0916 10:54:29.029112    5092 start.go:340] cluster config:
	{Name:flannel-900000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:54:29.032740    5092 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:54:29.040840    5092 out.go:177] * Starting "flannel-900000" primary control-plane node in "flannel-900000" cluster
	I0916 10:54:29.044799    5092 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:54:29.044815    5092 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:54:29.044827    5092 cache.go:56] Caching tarball of preloaded images
	I0916 10:54:29.044907    5092 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:54:29.044920    5092 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:54:29.044978    5092 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/flannel-900000/config.json ...
	I0916 10:54:29.044990    5092 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/flannel-900000/config.json: {Name:mk371d881d5be1282119d747dc924fabe38faed5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:54:29.045225    5092 start.go:360] acquireMachinesLock for flannel-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:54:29.045259    5092 start.go:364] duration metric: took 28.209µs to acquireMachinesLock for "flannel-900000"
	I0916 10:54:29.045269    5092 start.go:93] Provisioning new machine with config: &{Name:flannel-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:54:29.045292    5092 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:54:29.052809    5092 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:54:29.070892    5092 start.go:159] libmachine.API.Create for "flannel-900000" (driver="qemu2")
	I0916 10:54:29.070921    5092 client.go:168] LocalClient.Create starting
	I0916 10:54:29.070988    5092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:54:29.071018    5092 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:29.071027    5092 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:29.071069    5092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:54:29.071096    5092 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:29.071106    5092 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:29.071459    5092 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:54:29.235316    5092 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:29.380131    5092 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:29.380138    5092 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:29.380324    5092 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2
	I0916 10:54:29.389968    5092 main.go:141] libmachine: STDOUT: 
	I0916 10:54:29.389986    5092 main.go:141] libmachine: STDERR: 
	I0916 10:54:29.390056    5092 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2 +20000M
	I0916 10:54:29.397949    5092 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:29.397968    5092 main.go:141] libmachine: STDERR: 
	I0916 10:54:29.397984    5092 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2
	I0916 10:54:29.397988    5092 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:29.398002    5092 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:29.398036    5092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:22:7a:c7:75:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2
	I0916 10:54:29.399700    5092 main.go:141] libmachine: STDOUT: 
	I0916 10:54:29.399726    5092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:29.399747    5092 client.go:171] duration metric: took 328.829209ms to LocalClient.Create
	I0916 10:54:31.401941    5092 start.go:128] duration metric: took 2.356682917s to createHost
	I0916 10:54:31.402024    5092 start.go:83] releasing machines lock for "flannel-900000", held for 2.356826041s
	W0916 10:54:31.402077    5092 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:31.413453    5092 out.go:177] * Deleting "flannel-900000" in qemu2 ...
	W0916 10:54:31.446021    5092 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:31.446048    5092 start.go:729] Will try again in 5 seconds ...
	I0916 10:54:36.448174    5092 start.go:360] acquireMachinesLock for flannel-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:54:36.448817    5092 start.go:364] duration metric: took 472.25µs to acquireMachinesLock for "flannel-900000"
	I0916 10:54:36.448928    5092 start.go:93] Provisioning new machine with config: &{Name:flannel-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:54:36.449241    5092 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:54:36.457656    5092 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:54:36.508238    5092 start.go:159] libmachine.API.Create for "flannel-900000" (driver="qemu2")
	I0916 10:54:36.508290    5092 client.go:168] LocalClient.Create starting
	I0916 10:54:36.508407    5092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:54:36.508484    5092 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:36.508500    5092 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:36.508564    5092 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:54:36.508612    5092 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:36.508633    5092 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:36.509187    5092 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:54:36.680325    5092 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:36.732499    5092 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:36.732505    5092 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:36.732675    5092 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2
	I0916 10:54:36.741783    5092 main.go:141] libmachine: STDOUT: 
	I0916 10:54:36.741798    5092 main.go:141] libmachine: STDERR: 
	I0916 10:54:36.741851    5092 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2 +20000M
	I0916 10:54:36.749998    5092 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:36.750011    5092 main.go:141] libmachine: STDERR: 
	I0916 10:54:36.750030    5092 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2
	I0916 10:54:36.750036    5092 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:36.750047    5092 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:36.750075    5092 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:38:73:a0:b8:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/flannel-900000/disk.qcow2
	I0916 10:54:36.751754    5092 main.go:141] libmachine: STDOUT: 
	I0916 10:54:36.751767    5092 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:36.751779    5092 client.go:171] duration metric: took 243.490417ms to LocalClient.Create
	I0916 10:54:38.753827    5092 start.go:128] duration metric: took 2.30463625s to createHost
	I0916 10:54:38.753882    5092 start.go:83] releasing machines lock for "flannel-900000", held for 2.305082208s
	W0916 10:54:38.754052    5092 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:38.762575    5092 out.go:201] 
	W0916 10:54:38.770503    5092 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:54:38.770518    5092 out.go:270] * 
	W0916 10:54:38.771736    5092 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:54:38.787492    5092 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.87s)
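
Triage note: the failing invocation is captured verbatim above, and before exec'ing the wrapped command socket_vmnet_client only has to connect to the socket and pass the connection along (hence -netdev socket,id=net0,fd=3). The refusal can therefore be reproduced in isolation, without minikube; in this sketch /usr/bin/true stands in for qemu-system-aarch64, since the client fails before the wrapped command ever runs:

	# Expected to fail with the same "Failed to connect" message while the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If this command starts succeeding once the daemon is restored, the VM-creation step in these tests should get past the refusal as well.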

TestNetworkPlugins/group/bridge/Start (9.83s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.831435667s)

-- stdout --
	* [bridge-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-900000" primary control-plane node in "bridge-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:54:41.179003    5209 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:54:41.179131    5209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:41.179133    5209 out.go:358] Setting ErrFile to fd 2...
	I0916 10:54:41.179136    5209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:41.179249    5209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:54:41.180347    5209 out.go:352] Setting JSON to false
	I0916 10:54:41.197318    5209 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3245,"bootTime":1726506036,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:54:41.197400    5209 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:54:41.203005    5209 out.go:177] * [bridge-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:54:41.210960    5209 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:54:41.210987    5209 notify.go:220] Checking for updates...
	I0916 10:54:41.217977    5209 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:54:41.220964    5209 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:54:41.223942    5209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:54:41.226927    5209 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:54:41.229964    5209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:54:41.233333    5209 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:54:41.233395    5209 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:54:41.233440    5209 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:54:41.238036    5209 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:54:41.244979    5209 start.go:297] selected driver: qemu2
	I0916 10:54:41.244990    5209 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:54:41.244999    5209 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:54:41.247241    5209 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:54:41.249856    5209 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:54:41.252980    5209 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:54:41.252994    5209 cni.go:84] Creating CNI manager for "bridge"
	I0916 10:54:41.252997    5209 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:54:41.253027    5209 start.go:340] cluster config:
	{Name:bridge-900000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:54:41.256474    5209 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:54:41.275581    5209 out.go:177] * Starting "bridge-900000" primary control-plane node in "bridge-900000" cluster
	I0916 10:54:41.280031    5209 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:54:41.280050    5209 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:54:41.280062    5209 cache.go:56] Caching tarball of preloaded images
	I0916 10:54:41.280124    5209 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:54:41.280129    5209 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:54:41.280194    5209 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/bridge-900000/config.json ...
	I0916 10:54:41.280210    5209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/bridge-900000/config.json: {Name:mk30a102abe16a2be122ecb3c6f645a5785af840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:54:41.280697    5209 start.go:360] acquireMachinesLock for bridge-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:54:41.280732    5209 start.go:364] duration metric: took 28.958µs to acquireMachinesLock for "bridge-900000"
	I0916 10:54:41.280742    5209 start.go:93] Provisioning new machine with config: &{Name:bridge-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:54:41.280770    5209 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:54:41.290929    5209 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:54:41.307114    5209 start.go:159] libmachine.API.Create for "bridge-900000" (driver="qemu2")
	I0916 10:54:41.307139    5209 client.go:168] LocalClient.Create starting
	I0916 10:54:41.307200    5209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:54:41.307230    5209 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:41.307239    5209 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:41.307276    5209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:54:41.307299    5209 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:41.307305    5209 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:41.307637    5209 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:54:41.492270    5209 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:41.593795    5209 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:41.593803    5209 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:41.593978    5209 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2
	I0916 10:54:41.603733    5209 main.go:141] libmachine: STDOUT: 
	I0916 10:54:41.603746    5209 main.go:141] libmachine: STDERR: 
	I0916 10:54:41.603800    5209 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2 +20000M
	I0916 10:54:41.612033    5209 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:41.612048    5209 main.go:141] libmachine: STDERR: 
	I0916 10:54:41.612067    5209 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2
	I0916 10:54:41.612071    5209 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:41.612083    5209 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:41.612117    5209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:8e:94:ee:08:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2
	I0916 10:54:41.613859    5209 main.go:141] libmachine: STDOUT: 
	I0916 10:54:41.613872    5209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:41.613892    5209 client.go:171] duration metric: took 306.756375ms to LocalClient.Create
	I0916 10:54:43.615999    5209 start.go:128] duration metric: took 2.335225541s to createHost
	I0916 10:54:43.616042    5209 start.go:83] releasing machines lock for "bridge-900000", held for 2.335368583s
	W0916 10:54:43.616085    5209 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:43.625278    5209 out.go:177] * Deleting "bridge-900000" in qemu2 ...
	W0916 10:54:43.648383    5209 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:43.648396    5209 start.go:729] Will try again in 5 seconds ...
	I0916 10:54:48.650378    5209 start.go:360] acquireMachinesLock for bridge-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:54:48.650619    5209 start.go:364] duration metric: took 186.542µs to acquireMachinesLock for "bridge-900000"
	I0916 10:54:48.650687    5209 start.go:93] Provisioning new machine with config: &{Name:bridge-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:54:48.650763    5209 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:54:48.669095    5209 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:54:48.692084    5209 start.go:159] libmachine.API.Create for "bridge-900000" (driver="qemu2")
	I0916 10:54:48.692123    5209 client.go:168] LocalClient.Create starting
	I0916 10:54:48.692193    5209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:54:48.692244    5209 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:48.692254    5209 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:48.692294    5209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:54:48.692320    5209 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:48.692328    5209 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:48.692808    5209 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:54:48.857619    5209 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:48.925308    5209 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:48.925317    5209 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:48.925503    5209 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2
	I0916 10:54:48.934764    5209 main.go:141] libmachine: STDOUT: 
	I0916 10:54:48.934786    5209 main.go:141] libmachine: STDERR: 
	I0916 10:54:48.934844    5209 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2 +20000M
	I0916 10:54:48.942778    5209 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:48.942794    5209 main.go:141] libmachine: STDERR: 
	I0916 10:54:48.942806    5209 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2
	I0916 10:54:48.942810    5209 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:48.942827    5209 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:48.942853    5209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:2f:63:2f:3a:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/bridge-900000/disk.qcow2
	I0916 10:54:48.944606    5209 main.go:141] libmachine: STDOUT: 
	I0916 10:54:48.944622    5209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:48.944640    5209 client.go:171] duration metric: took 252.513208ms to LocalClient.Create
	I0916 10:54:50.946104    5209 start.go:128] duration metric: took 2.295396666s to createHost
	I0916 10:54:50.946117    5209 start.go:83] releasing machines lock for "bridge-900000", held for 2.295558542s
	W0916 10:54:50.946191    5209 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:50.959928    5209 out.go:201] 
	W0916 10:54:50.962990    5209 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:54:50.963005    5209 out.go:270] * 
	W0916 10:54:50.963455    5209 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:54:50.972024    5209 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.83s)
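
Triage note: with the daemon confirmed down, restarting it is the thing to try before re-running the group. How it should be restarted depends on how socket_vmnet was installed on this agent; the launchd label and gateway address below are assumptions based on a stock socket_vmnet install, not values visible in this log:

	# If installed via the upstream launchd plist (label assumed):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Or run the daemon in the foreground for a single test cycle (gateway address illustrative):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet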

TestNetworkPlugins/group/kubenet/Start (9.89s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-900000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.883389625s)

-- stdout --
	* [kubenet-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-900000" primary control-plane node in "kubenet-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:54:53.171151    5321 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:54:53.171291    5321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:53.171295    5321 out.go:358] Setting ErrFile to fd 2...
	I0916 10:54:53.171297    5321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:54:53.171444    5321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:54:53.172565    5321 out.go:352] Setting JSON to false
	I0916 10:54:53.189025    5321 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3257,"bootTime":1726506036,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:54:53.189089    5321 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:54:53.195553    5321 out.go:177] * [kubenet-900000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:54:53.203588    5321 notify.go:220] Checking for updates...
	I0916 10:54:53.208520    5321 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:54:53.215518    5321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:54:53.223537    5321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:54:53.226391    5321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:54:53.230533    5321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:54:53.234503    5321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:54:53.238817    5321 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:54:53.238880    5321 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:54:53.238921    5321 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:54:53.242580    5321 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:54:53.249423    5321 start.go:297] selected driver: qemu2
	I0916 10:54:53.249428    5321 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:54:53.249433    5321 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:54:53.251756    5321 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:54:53.255527    5321 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:54:53.258575    5321 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:54:53.258599    5321 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0916 10:54:53.258627    5321 start.go:340] cluster config:
	{Name:kubenet-900000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:54:53.262353    5321 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:54:53.269543    5321 out.go:177] * Starting "kubenet-900000" primary control-plane node in "kubenet-900000" cluster
	I0916 10:54:53.273356    5321 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:54:53.273371    5321 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:54:53.273381    5321 cache.go:56] Caching tarball of preloaded images
	I0916 10:54:53.273455    5321 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:54:53.273461    5321 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:54:53.273526    5321 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/kubenet-900000/config.json ...
	I0916 10:54:53.273536    5321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/kubenet-900000/config.json: {Name:mk37dd4ac0a312d2b9534400c0fcdf3edaf39e30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:54:53.273761    5321 start.go:360] acquireMachinesLock for kubenet-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:54:53.273795    5321 start.go:364] duration metric: took 27.709µs to acquireMachinesLock for "kubenet-900000"
	I0916 10:54:53.273805    5321 start.go:93] Provisioning new machine with config: &{Name:kubenet-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:54:53.273841    5321 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:54:53.282324    5321 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:54:53.299255    5321 start.go:159] libmachine.API.Create for "kubenet-900000" (driver="qemu2")
	I0916 10:54:53.299290    5321 client.go:168] LocalClient.Create starting
	I0916 10:54:53.299361    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:54:53.299390    5321 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:53.299402    5321 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:53.299440    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:54:53.299463    5321 main.go:141] libmachine: Decoding PEM data...
	I0916 10:54:53.299475    5321 main.go:141] libmachine: Parsing certificate...
	I0916 10:54:53.299833    5321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:54:53.464637    5321 main.go:141] libmachine: Creating SSH key...
	I0916 10:54:53.617533    5321 main.go:141] libmachine: Creating Disk image...
	I0916 10:54:53.617542    5321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:54:53.617749    5321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2
	I0916 10:54:53.627143    5321 main.go:141] libmachine: STDOUT: 
	I0916 10:54:53.627168    5321 main.go:141] libmachine: STDERR: 
	I0916 10:54:53.627231    5321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2 +20000M
	I0916 10:54:53.635111    5321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:54:53.635128    5321 main.go:141] libmachine: STDERR: 
	I0916 10:54:53.635139    5321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2
	I0916 10:54:53.635145    5321 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:54:53.635158    5321 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:54:53.635187    5321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:c2:60:0a:db:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2
	I0916 10:54:53.636854    5321 main.go:141] libmachine: STDOUT: 
	I0916 10:54:53.636867    5321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:54:53.636887    5321 client.go:171] duration metric: took 337.599875ms to LocalClient.Create
	I0916 10:54:55.638966    5321 start.go:128] duration metric: took 2.365177416s to createHost
	I0916 10:54:55.638996    5321 start.go:83] releasing machines lock for "kubenet-900000", held for 2.365267625s
	W0916 10:54:55.639012    5321 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:55.649745    5321 out.go:177] * Deleting "kubenet-900000" in qemu2 ...
	W0916 10:54:55.666741    5321 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:54:55.666760    5321 start.go:729] Will try again in 5 seconds ...
	I0916 10:55:00.668921    5321 start.go:360] acquireMachinesLock for kubenet-900000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:00.669458    5321 start.go:364] duration metric: took 456.125µs to acquireMachinesLock for "kubenet-900000"
	I0916 10:55:00.669561    5321 start.go:93] Provisioning new machine with config: &{Name:kubenet-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:00.669780    5321 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:00.676980    5321 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0916 10:55:00.722230    5321 start.go:159] libmachine.API.Create for "kubenet-900000" (driver="qemu2")
	I0916 10:55:00.722284    5321 client.go:168] LocalClient.Create starting
	I0916 10:55:00.722403    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:00.722483    5321 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:00.722502    5321 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:00.722574    5321 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:00.722619    5321 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:00.722633    5321 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:00.723295    5321 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:00.904251    5321 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:00.958497    5321 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:00.958503    5321 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:00.958703    5321 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2
	I0916 10:55:00.968213    5321 main.go:141] libmachine: STDOUT: 
	I0916 10:55:00.968229    5321 main.go:141] libmachine: STDERR: 
	I0916 10:55:00.968294    5321 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2 +20000M
	I0916 10:55:00.976307    5321 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:00.976322    5321 main.go:141] libmachine: STDERR: 
	I0916 10:55:00.976335    5321 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2
	I0916 10:55:00.976340    5321 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:00.976349    5321 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:00.976379    5321 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:dd:f7:d5:67:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/kubenet-900000/disk.qcow2
	I0916 10:55:00.978109    5321 main.go:141] libmachine: STDOUT: 
	I0916 10:55:00.978126    5321 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:00.978141    5321 client.go:171] duration metric: took 255.859083ms to LocalClient.Create
	I0916 10:55:02.980289    5321 start.go:128] duration metric: took 2.310538042s to createHost
	I0916 10:55:02.980370    5321 start.go:83] releasing machines lock for "kubenet-900000", held for 2.310956167s
	W0916 10:55:02.980792    5321 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:02.990545    5321 out.go:201] 
	W0916 10:55:03.000862    5321 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:03.000924    5321 out.go:270] * 
	* 
	W0916 10:55:03.003645    5321 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:55:03.015188    5321 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.89s)
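
The failing step can be reproduced without QEMU at all: socket_vmnet_client connects to the unix socket and then execs the command that follows it, handing the connection down as an inherited file descriptor (the `-netdev socket,id=net0,fd=3` in the command lines above is that descriptor). Substituting a trivial command isolates the connection itself; a sketch, with /usr/bin/true standing in for the qemu-system-aarch64 invocation:

	# Exits non-zero with the same "Connection refused" while the daemon is
	# down, and exits 0 once /var/run/socket_vmnet is being served.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	echo "socket_vmnet_client exit status: $?"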

TestStartStop/group/old-k8s-version/serial/FirstStart (10.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
E0916 10:55:07.144174    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.11872975s)

-- stdout --
	* [old-k8s-version-424000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-424000" primary control-plane node in "old-k8s-version-424000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-424000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:55:05.277063    5437 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:05.277221    5437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:05.277224    5437 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:05.277227    5437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:05.277371    5437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:05.278463    5437 out.go:352] Setting JSON to false
	I0916 10:55:05.295043    5437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3269,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:55:05.295111    5437 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:55:05.301615    5437 out.go:177] * [old-k8s-version-424000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:55:05.310914    5437 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:55:05.310959    5437 notify.go:220] Checking for updates...
	I0916 10:55:05.319272    5437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:55:05.323322    5437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:55:05.326559    5437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:05.330062    5437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:55:05.333501    5437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:05.337017    5437 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:05.337088    5437 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:55:05.337127    5437 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:05.341510    5437 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:55:05.348536    5437 start.go:297] selected driver: qemu2
	I0916 10:55:05.348542    5437 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:55:05.348549    5437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:05.350856    5437 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:55:05.354512    5437 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:55:05.357524    5437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:55:05.357540    5437 cni.go:84] Creating CNI manager for ""
	I0916 10:55:05.357563    5437 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:55:05.357594    5437 start.go:340] cluster config:
	{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:05.361162    5437 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:05.367052    5437 out.go:177] * Starting "old-k8s-version-424000" primary control-plane node in "old-k8s-version-424000" cluster
	I0916 10:55:05.371458    5437 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:55:05.371471    5437 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 10:55:05.371480    5437 cache.go:56] Caching tarball of preloaded images
	I0916 10:55:05.371534    5437 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:55:05.371540    5437 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 10:55:05.371609    5437 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/old-k8s-version-424000/config.json ...
	I0916 10:55:05.371619    5437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/old-k8s-version-424000/config.json: {Name:mkfbbcad0ed692c7c25c7177173e6bacc7dc5dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:55:05.371928    5437 start.go:360] acquireMachinesLock for old-k8s-version-424000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:05.371961    5437 start.go:364] duration metric: took 24.167µs to acquireMachinesLock for "old-k8s-version-424000"
	I0916 10:55:05.371970    5437 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:05.371993    5437 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:05.380549    5437 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:55:05.396316    5437 start.go:159] libmachine.API.Create for "old-k8s-version-424000" (driver="qemu2")
	I0916 10:55:05.396344    5437 client.go:168] LocalClient.Create starting
	I0916 10:55:05.396403    5437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:05.396434    5437 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:05.396446    5437 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:05.396489    5437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:05.396512    5437 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:05.396517    5437 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:05.396884    5437 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:05.568690    5437 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:05.662436    5437 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:05.662443    5437 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:05.662642    5437 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I0916 10:55:05.672083    5437 main.go:141] libmachine: STDOUT: 
	I0916 10:55:05.672098    5437 main.go:141] libmachine: STDERR: 
	I0916 10:55:05.672159    5437 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2 +20000M
	I0916 10:55:05.680302    5437 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:05.680316    5437 main.go:141] libmachine: STDERR: 
	I0916 10:55:05.680330    5437 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I0916 10:55:05.680334    5437 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:05.680345    5437 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:05.680370    5437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:a1:42:f4:e7:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I0916 10:55:05.682023    5437 main.go:141] libmachine: STDOUT: 
	I0916 10:55:05.682036    5437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:05.682058    5437 client.go:171] duration metric: took 285.717583ms to LocalClient.Create
	I0916 10:55:07.684194    5437 start.go:128] duration metric: took 2.31224575s to createHost
	I0916 10:55:07.684283    5437 start.go:83] releasing machines lock for "old-k8s-version-424000", held for 2.312382875s
	W0916 10:55:07.684338    5437 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:07.696152    5437 out.go:177] * Deleting "old-k8s-version-424000" in qemu2 ...
	W0916 10:55:07.727978    5437 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:07.728020    5437 start.go:729] Will try again in 5 seconds ...
	I0916 10:55:12.730005    5437 start.go:360] acquireMachinesLock for old-k8s-version-424000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:12.730247    5437 start.go:364] duration metric: took 188.375µs to acquireMachinesLock for "old-k8s-version-424000"
	I0916 10:55:12.730304    5437 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:12.730409    5437 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:12.738324    5437 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:55:12.768054    5437 start.go:159] libmachine.API.Create for "old-k8s-version-424000" (driver="qemu2")
	I0916 10:55:12.768113    5437 client.go:168] LocalClient.Create starting
	I0916 10:55:12.768189    5437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:12.768232    5437 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:12.768244    5437 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:12.768296    5437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:12.768331    5437 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:12.768345    5437 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:12.768799    5437 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:13.057265    5437 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:13.302421    5437 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:13.302433    5437 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:13.302687    5437 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I0916 10:55:13.312696    5437 main.go:141] libmachine: STDOUT: 
	I0916 10:55:13.312723    5437 main.go:141] libmachine: STDERR: 
	I0916 10:55:13.312794    5437 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2 +20000M
	I0916 10:55:13.321172    5437 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:13.321187    5437 main.go:141] libmachine: STDERR: 
	I0916 10:55:13.321197    5437 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I0916 10:55:13.321201    5437 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:13.321213    5437 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:13.321247    5437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:dd:9c:d7:b5:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I0916 10:55:13.322936    5437 main.go:141] libmachine: STDOUT: 
	I0916 10:55:13.322957    5437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:13.322978    5437 client.go:171] duration metric: took 554.876666ms to LocalClient.Create
	I0916 10:55:15.324570    5437 start.go:128] duration metric: took 2.594207s to createHost
	I0916 10:55:15.324672    5437 start.go:83] releasing machines lock for "old-k8s-version-424000", held for 2.59448875s
	W0916 10:55:15.324974    5437 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-424000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:15.337239    5437 out.go:201] 
	W0916 10:55:15.341379    5437 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:15.341421    5437 out.go:270] * 
	* 
	W0916 10:55:15.344234    5437 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:55:15.354321    5437 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (70.757334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.19s)
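
Because FirstStart never got a VM past creation, no kubeconfig context for old-k8s-version-424000 was ever written; the serial subtests that follow fail on that missing context rather than on anything they actually test. Cleaning up before a retry follows the advice in the failure output itself (a sketch using the profile name and binary path from this run):

	# Remove the half-created profile, as the error message suggests.
	out/minikube-darwin-arm64 delete -p old-k8s-version-424000

	# Confirm nothing is left over for the profile.
	out/minikube-darwin-arm64 profile list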

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-424000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-424000 create -f testdata/busybox.yaml: exit status 1 (32.352917ms)

** stderr ** 
	error: context "old-k8s-version-424000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-424000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (31.151083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (33.939625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
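
The `context "old-k8s-version-424000" does not exist` error can be checked directly against the kubeconfig this run uses (path taken from the KUBECONFIG value printed in the start output above); minikube only writes a profile's context after a successful start:

	# List contexts known to this run's kubeconfig; the failed profile
	# should be absent.
	KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig \
		kubectl config get-contexts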

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-424000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-424000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-424000 describe deploy/metrics-server -n kube-system: exit status 1 (27.870791ms)

** stderr ** 
	error: context "old-k8s-version-424000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-424000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (35.78ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
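
For reference, on a cluster that had actually started, the assertion this subtest makes could be checked by reading the image field that the --images/--registries flags rewrite; the test expects it to contain "fake.domain/registry.k8s.io/echoserver:1.4". A sketch using a standard jsonpath query:

	# Print the container image of the metrics-server deployment.
	kubectl --context old-k8s-version-424000 -n kube-system \
		get deploy metrics-server \
		-o jsonpath='{.spec.template.spec.containers[0].image}'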

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.202824s)

-- stdout --
	* [old-k8s-version-424000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-424000" primary control-plane node in "old-k8s-version-424000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-424000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-424000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:55:17.863694    5483 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:17.863840    5483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:17.863845    5483 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:17.863847    5483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:17.863984    5483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:17.865072    5483 out.go:352] Setting JSON to false
	I0916 10:55:17.883170    5483 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3281,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:55:17.883240    5483 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:55:17.889299    5483 out.go:177] * [old-k8s-version-424000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:55:17.897166    5483 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:55:17.897224    5483 notify.go:220] Checking for updates...
	I0916 10:55:17.906001    5483 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:55:17.909737    5483 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:55:17.913172    5483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:17.916449    5483 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:55:17.920189    5483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:17.923359    5483 config.go:182] Loaded profile config "old-k8s-version-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0916 10:55:17.924858    5483 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 10:55:17.928121    5483 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:17.932657    5483 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:55:17.944125    5483 start.go:297] selected driver: qemu2
	I0916 10:55:17.944130    5483 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:17.944177    5483 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:17.946585    5483 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:55:17.946614    5483 cni.go:84] Creating CNI manager for ""
	I0916 10:55:17.946635    5483 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:55:17.946661    5483 start.go:340] cluster config:
	{Name:old-k8s-version-424000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-424000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:17.950446    5483 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:17.959111    5483 out.go:177] * Starting "old-k8s-version-424000" primary control-plane node in "old-k8s-version-424000" cluster
	I0916 10:55:17.963703    5483 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:55:17.963716    5483 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 10:55:17.963728    5483 cache.go:56] Caching tarball of preloaded images
	I0916 10:55:17.963780    5483 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:55:17.963785    5483 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 10:55:17.963834    5483 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/old-k8s-version-424000/config.json ...
	I0916 10:55:17.964411    5483 start.go:360] acquireMachinesLock for old-k8s-version-424000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:17.964438    5483 start.go:364] duration metric: took 20.917µs to acquireMachinesLock for "old-k8s-version-424000"
	I0916 10:55:17.964446    5483 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:55:17.964451    5483 fix.go:54] fixHost starting: 
	I0916 10:55:17.964573    5483 fix.go:112] recreateIfNeeded on old-k8s-version-424000: state=Stopped err=<nil>
	W0916 10:55:17.964580    5483 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:55:17.968072    5483 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-424000" ...
	I0916 10:55:17.976140    5483 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:17.976172    5483 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:dd:9c:d7:b5:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I0916 10:55:17.978047    5483 main.go:141] libmachine: STDOUT: 
	I0916 10:55:17.978062    5483 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:17.978088    5483 fix.go:56] duration metric: took 13.637083ms for fixHost
	I0916 10:55:17.978094    5483 start.go:83] releasing machines lock for "old-k8s-version-424000", held for 13.652167ms
	W0916 10:55:17.978099    5483 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:17.978129    5483 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:17.978133    5483 start.go:729] Will try again in 5 seconds ...
	I0916 10:55:22.980113    5483 start.go:360] acquireMachinesLock for old-k8s-version-424000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:22.980574    5483 start.go:364] duration metric: took 334.084µs to acquireMachinesLock for "old-k8s-version-424000"
	I0916 10:55:22.980691    5483 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:55:22.980706    5483 fix.go:54] fixHost starting: 
	I0916 10:55:22.981212    5483 fix.go:112] recreateIfNeeded on old-k8s-version-424000: state=Stopped err=<nil>
	W0916 10:55:22.981230    5483 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:55:22.990623    5483 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-424000" ...
	I0916 10:55:22.994621    5483 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:22.994790    5483 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:dd:9c:d7:b5:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/old-k8s-version-424000/disk.qcow2
	I0916 10:55:23.001945    5483 main.go:141] libmachine: STDOUT: 
	I0916 10:55:23.002012    5483 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:23.002085    5483 fix.go:56] duration metric: took 21.380583ms for fixHost
	I0916 10:55:23.002101    5483 start.go:83] releasing machines lock for "old-k8s-version-424000", held for 21.493417ms
	W0916 10:55:23.002255    5483 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-424000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:23.009587    5483 out.go:201] 
	W0916 10:55:23.013613    5483 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:23.013630    5483 out.go:270] * 
	* 
	W0916 10:55:23.015460    5483 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:55:23.023648    5483 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-424000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (62.075208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
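
Both restart attempts fail at the same step: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to the /var/run/socket_vmnet unix socket and hand the resulting fd to QEMU as `-netdev socket,id=net0,fd=3`; with the daemon down the connect is refused and QEMU is never started. A minimal host-side check, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs (these commands are assumptions about the CI host, not taken from this log):

	# the daemon's unix socket should exist while socket_vmnet is running
	ls -l /var/run/socket_vmnet
	# restart the Homebrew-managed service (root is needed to bind under /var/run)
	sudo brew services restart socket_vmnet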

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-424000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (31.328708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-424000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-424000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-424000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.0365ms)

** stderr ** 
	error: context "old-k8s-version-424000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-424000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (29.470958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-424000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (30.173708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
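
The `(-want +got)` block is go-cmp diff notation: each `-` line is an image expected in the v1.20.0 bundle but absent from the command's output, and there are no `+` lines because `image list` returned nothing against the stopped VM. The `got` side can be reproduced with the same command the test ran:

	out/minikube-darwin-arm64 -p old-k8s-version-424000 image list --format=json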

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-424000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-424000 --alsologtostderr -v=1: exit status 83 (43.811292ms)

-- stdout --
	* The control-plane node old-k8s-version-424000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-424000"

-- /stdout --
** stderr ** 
	I0916 10:55:23.289281    5505 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:23.290393    5505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:23.290398    5505 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:23.290400    5505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:23.290589    5505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:23.290798    5505 out.go:352] Setting JSON to false
	I0916 10:55:23.290803    5505 mustload.go:65] Loading cluster: old-k8s-version-424000
	I0916 10:55:23.291003    5505 config.go:182] Loaded profile config "old-k8s-version-424000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0916 10:55:23.295524    5505 out.go:177] * The control-plane node old-k8s-version-424000 host is not running: state=Stopped
	I0916 10:55:23.298426    5505 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-424000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-424000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (28.868292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (29.953083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-424000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.892983459s)

-- stdout --
	* [no-preload-117000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-117000" primary control-plane node in "no-preload-117000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-117000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:55:23.610828    5522 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:23.610954    5522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:23.610957    5522 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:23.610960    5522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:23.611072    5522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:23.612215    5522 out.go:352] Setting JSON to false
	I0916 10:55:23.628843    5522 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3287,"bootTime":1726506036,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:55:23.628912    5522 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:55:23.633897    5522 out.go:177] * [no-preload-117000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:55:23.641052    5522 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:55:23.641078    5522 notify.go:220] Checking for updates...
	I0916 10:55:23.647055    5522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:55:23.650031    5522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:55:23.651626    5522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:23.655078    5522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:55:23.658053    5522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:23.661540    5522 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:23.661605    5522 config.go:182] Loaded profile config "stopped-upgrade-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0916 10:55:23.661656    5522 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:23.666047    5522 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:55:23.673035    5522 start.go:297] selected driver: qemu2
	I0916 10:55:23.673042    5522 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:55:23.673048    5522 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:23.675307    5522 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:55:23.679039    5522 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:55:23.682158    5522 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:55:23.682178    5522 cni.go:84] Creating CNI manager for ""
	I0916 10:55:23.682197    5522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:55:23.682202    5522 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:55:23.682230    5522 start.go:340] cluster config:
	{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:23.685715    5522 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:23.693197    5522 out.go:177] * Starting "no-preload-117000" primary control-plane node in "no-preload-117000" cluster
	I0916 10:55:23.697077    5522 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:55:23.697137    5522 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/no-preload-117000/config.json ...
	I0916 10:55:23.697151    5522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/no-preload-117000/config.json: {Name:mk87441f535e28d70b8194164ad73ce8d918024c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:55:23.697153    5522 cache.go:107] acquiring lock: {Name:mk9957ee1584da5e9c74daf97ce53b8c1c1ab620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:23.697153    5522 cache.go:107] acquiring lock: {Name:mk0011622b8533efc9bbc0409e95a3ba3f2751c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:23.697207    5522 cache.go:115] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 10:55:23.697212    5522 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 63.75µs
	I0916 10:55:23.697218    5522 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 10:55:23.697224    5522 cache.go:107] acquiring lock: {Name:mk7ca50e9a6faf91f161fa2479069842d39b8c06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:23.697227    5522 cache.go:107] acquiring lock: {Name:mk8d13c8d24bf5217489bcda242a85fdd2c04abd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:23.697280    5522 cache.go:107] acquiring lock: {Name:mked4130933966c32d30ee6859bf85da1c2b3278 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:23.697333    5522 cache.go:107] acquiring lock: {Name:mkcead36d9763e160ef6872ec00decd1072203f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:23.697406    5522 cache.go:107] acquiring lock: {Name:mk4f14092a189f6aa49594a0772c63b9accd18cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:23.697467    5522 cache.go:107] acquiring lock: {Name:mkc2583cf584f80a0feb6dc97b45aba9bfa85a8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:23.697602    5522 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 10:55:23.697603    5522 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 10:55:23.697604    5522 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 10:55:23.697714    5522 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 10:55:23.697727    5522 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 10:55:23.697739    5522 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 10:55:23.697805    5522 start.go:360] acquireMachinesLock for no-preload-117000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:23.697841    5522 start.go:364] duration metric: took 30.208µs to acquireMachinesLock for "no-preload-117000"
	I0916 10:55:23.697852    5522 start.go:93] Provisioning new machine with config: &{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:23.697875    5522 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:23.697965    5522 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 10:55:23.705905    5522 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:55:23.710476    5522 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 10:55:23.711388    5522 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 10:55:23.711553    5522 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 10:55:23.712571    5522 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 10:55:23.715070    5522 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 10:55:23.715138    5522 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 10:55:23.715167    5522 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 10:55:23.723014    5522 start.go:159] libmachine.API.Create for "no-preload-117000" (driver="qemu2")
	I0916 10:55:23.723039    5522 client.go:168] LocalClient.Create starting
	I0916 10:55:23.723131    5522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:23.723171    5522 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:23.723196    5522 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:23.723240    5522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:23.723263    5522 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:23.723273    5522 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:23.723661    5522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:23.893279    5522 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:24.046170    5522 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:24.046188    5522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:24.046378    5522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2
	I0916 10:55:24.055873    5522 main.go:141] libmachine: STDOUT: 
	I0916 10:55:24.055908    5522 main.go:141] libmachine: STDERR: 
	I0916 10:55:24.055954    5522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2 +20000M
	I0916 10:55:24.064311    5522 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:24.064323    5522 main.go:141] libmachine: STDERR: 
	I0916 10:55:24.064336    5522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2
	I0916 10:55:24.064341    5522 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:24.064356    5522 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:24.064397    5522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:e7:81:da:0b:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2
	I0916 10:55:24.066267    5522 main.go:141] libmachine: STDOUT: 
	I0916 10:55:24.066282    5522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:24.066299    5522 client.go:171] duration metric: took 343.265042ms to LocalClient.Create
	I0916 10:55:24.088247    5522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0916 10:55:24.142051    5522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 10:55:24.150077    5522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0916 10:55:24.165669    5522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 10:55:24.183549    5522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 10:55:24.190200    5522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 10:55:24.210042    5522 cache.go:162] opening:  /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 10:55:24.315593    5522 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0916 10:55:24.315611    5522 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 618.40525ms
	I0916 10:55:24.315619    5522 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0916 10:55:26.066445    5522 start.go:128] duration metric: took 2.368622625s to createHost
	I0916 10:55:26.066474    5522 start.go:83] releasing machines lock for "no-preload-117000", held for 2.368697833s
	W0916 10:55:26.066509    5522 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:26.082686    5522 out.go:177] * Deleting "no-preload-117000" in qemu2 ...
	W0916 10:55:26.109623    5522 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:26.109638    5522 start.go:729] Will try again in 5 seconds ...
	I0916 10:55:27.422519    5522 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 10:55:27.422564    5522 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 3.725454542s
	I0916 10:55:27.422587    5522 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 10:55:27.496768    5522 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 10:55:27.496795    5522 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.79954975s
	I0916 10:55:27.496808    5522 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 10:55:27.814326    5522 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 10:55:27.814353    5522 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.117122083s
	I0916 10:55:27.814366    5522 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 10:55:28.476487    5522 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 10:55:28.476537    5522 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.7794125s
	I0916 10:55:28.476562    5522 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 10:55:28.947207    5522 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 10:55:28.947289    5522 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.250299458s
	I0916 10:55:28.947318    5522 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 10:55:31.109684    5522 start.go:360] acquireMachinesLock for no-preload-117000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:31.109952    5522 start.go:364] duration metric: took 226.666µs to acquireMachinesLock for "no-preload-117000"
	I0916 10:55:31.110029    5522 start.go:93] Provisioning new machine with config: &{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:31.110154    5522 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:31.119607    5522 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:55:31.159209    5522 start.go:159] libmachine.API.Create for "no-preload-117000" (driver="qemu2")
	I0916 10:55:31.159252    5522 client.go:168] LocalClient.Create starting
	I0916 10:55:31.159368    5522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:31.159428    5522 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:31.159448    5522 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:31.159508    5522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:31.159548    5522 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:31.159564    5522 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:31.159975    5522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:31.368985    5522 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:31.415857    5522 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:31.415863    5522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:31.416054    5522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2
	I0916 10:55:31.425806    5522 main.go:141] libmachine: STDOUT: 
	I0916 10:55:31.425827    5522 main.go:141] libmachine: STDERR: 
	I0916 10:55:31.425889    5522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2 +20000M
	I0916 10:55:31.434487    5522 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:31.434508    5522 main.go:141] libmachine: STDERR: 
	I0916 10:55:31.434520    5522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2
	I0916 10:55:31.434532    5522 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:31.434541    5522 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:31.434580    5522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:5c:63:bf:4a:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2
	I0916 10:55:31.436308    5522 main.go:141] libmachine: STDOUT: 
	I0916 10:55:31.436321    5522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:31.436332    5522 client.go:171] duration metric: took 277.082917ms to LocalClient.Create
	I0916 10:55:32.304689    5522 cache.go:157] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 10:55:32.304749    5522 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.607701583s
	I0916 10:55:32.304777    5522 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 10:55:32.304808    5522 cache.go:87] Successfully saved all images to host disk.
	I0916 10:55:33.438487    5522 start.go:128] duration metric: took 2.328346959s to createHost
	I0916 10:55:33.438532    5522 start.go:83] releasing machines lock for "no-preload-117000", held for 2.328628625s
	W0916 10:55:33.438828    5522 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-117000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:33.449419    5522 out.go:201] 
	W0916 10:55:33.454514    5522 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:33.454552    5522 out.go:270] * 
	W0916 10:55:33.456383    5522 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:55:33.466387    5522 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (49.531875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
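Every failure in this group traces to the same root cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so the QEMU VM is never launched and the profile is left "Stopped". The post-mortem exit status 7 is consistent with that: in this minikube version the status command ORs together per-component "not running" flags (host 1, kubelet 2, apiserver 4), so 7 indicates all three are down, which is why the harness notes "(may be ok)". A minimal Go sketch for probing the daemon directly, independent of minikube (the socket path is taken from the log above; everything else is standard library):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// The same unix socket that socket_vmnet_client fails to reach in the log.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here reproduces the failure mode in this report:
		// the socket file may exist, but no daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails on a host where socket_vmnet was installed via Homebrew, restarting its launchd service (typically "sudo brew services start socket_vmnet") is the usual fix.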

TestStartStop/group/embed-certs/serial/FirstStart (11.34s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-663000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-663000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.2721485s)

-- stdout --
	* [embed-certs-663000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-663000" primary control-plane node in "embed-certs-663000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-663000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:55:32.052138    5567 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:32.052451    5567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:32.052456    5567 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:32.052459    5567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:32.052628    5567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:32.054042    5567 out.go:352] Setting JSON to false
	I0916 10:55:32.070860    5567 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3296,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:55:32.070947    5567 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:55:32.074505    5567 out.go:177] * [embed-certs-663000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:55:32.082518    5567 notify.go:220] Checking for updates...
	I0916 10:55:32.086461    5567 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:55:32.094460    5567 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:55:32.102522    5567 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:55:32.106312    5567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:32.113541    5567 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:55:32.121509    5567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:32.125914    5567 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:32.125988    5567 config.go:182] Loaded profile config "no-preload-117000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:32.126040    5567 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:32.129463    5567 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:55:32.136528    5567 start.go:297] selected driver: qemu2
	I0916 10:55:32.136533    5567 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:55:32.136543    5567 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:32.138941    5567 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:55:32.142446    5567 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:55:32.145572    5567 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:55:32.145592    5567 cni.go:84] Creating CNI manager for ""
	I0916 10:55:32.145618    5567 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:55:32.145622    5567 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:55:32.145651    5567 start.go:340] cluster config:
	{Name:embed-certs-663000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:32.149393    5567 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:32.157522    5567 out.go:177] * Starting "embed-certs-663000" primary control-plane node in "embed-certs-663000" cluster
	I0916 10:55:32.161524    5567 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:55:32.161542    5567 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:55:32.161556    5567 cache.go:56] Caching tarball of preloaded images
	I0916 10:55:32.161636    5567 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:55:32.161643    5567 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:55:32.161712    5567 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/embed-certs-663000/config.json ...
	I0916 10:55:32.161724    5567 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/embed-certs-663000/config.json: {Name:mk1fb0e330c38130d64ad04ecb099dcd05eea895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:55:32.162097    5567 start.go:360] acquireMachinesLock for embed-certs-663000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:33.438704    5567 start.go:364] duration metric: took 1.276571084s to acquireMachinesLock for "embed-certs-663000"
	I0916 10:55:33.438859    5567 start.go:93] Provisioning new machine with config: &{Name:embed-certs-663000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:33.439071    5567 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:33.446416    5567 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:55:33.494993    5567 start.go:159] libmachine.API.Create for "embed-certs-663000" (driver="qemu2")
	I0916 10:55:33.495043    5567 client.go:168] LocalClient.Create starting
	I0916 10:55:33.495153    5567 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:33.495211    5567 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:33.495230    5567 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:33.495301    5567 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:33.495347    5567 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:33.495366    5567 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:33.495974    5567 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:33.721581    5567 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:33.833516    5567 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:33.833522    5567 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:33.833690    5567 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2
	I0916 10:55:33.842687    5567 main.go:141] libmachine: STDOUT: 
	I0916 10:55:33.842704    5567 main.go:141] libmachine: STDERR: 
	I0916 10:55:33.842758    5567 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2 +20000M
	I0916 10:55:33.850631    5567 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:33.850646    5567 main.go:141] libmachine: STDERR: 
	I0916 10:55:33.850665    5567 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2
	I0916 10:55:33.850670    5567 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:33.850683    5567 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:33.850715    5567 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:62:a7:2d:db:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2
	I0916 10:55:33.852308    5567 main.go:141] libmachine: STDOUT: 
	I0916 10:55:33.852320    5567 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:33.852342    5567 client.go:171] duration metric: took 357.303334ms to LocalClient.Create
	I0916 10:55:35.854523    5567 start.go:128] duration metric: took 2.415492125s to createHost
	I0916 10:55:35.854586    5567 start.go:83] releasing machines lock for "embed-certs-663000", held for 2.415919625s
	W0916 10:55:35.854637    5567 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:35.862008    5567 out.go:177] * Deleting "embed-certs-663000" in qemu2 ...
	W0916 10:55:35.896400    5567 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:35.896427    5567 start.go:729] Will try again in 5 seconds ...
	I0916 10:55:40.896701    5567 start.go:360] acquireMachinesLock for embed-certs-663000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:40.897027    5567 start.go:364] duration metric: took 203.584µs to acquireMachinesLock for "embed-certs-663000"
	I0916 10:55:40.897117    5567 start.go:93] Provisioning new machine with config: &{Name:embed-certs-663000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:40.897412    5567 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:40.906997    5567 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:55:40.958024    5567 start.go:159] libmachine.API.Create for "embed-certs-663000" (driver="qemu2")
	I0916 10:55:40.958079    5567 client.go:168] LocalClient.Create starting
	I0916 10:55:40.958233    5567 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:40.958322    5567 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:40.958339    5567 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:40.958411    5567 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:40.958460    5567 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:40.958472    5567 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:40.959038    5567 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:41.136026    5567 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:41.210090    5567 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:41.210096    5567 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:41.210274    5567 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2
	I0916 10:55:41.219922    5567 main.go:141] libmachine: STDOUT: 
	I0916 10:55:41.219937    5567 main.go:141] libmachine: STDERR: 
	I0916 10:55:41.219998    5567 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2 +20000M
	I0916 10:55:41.228030    5567 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:41.228059    5567 main.go:141] libmachine: STDERR: 
	I0916 10:55:41.228072    5567 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2
	I0916 10:55:41.228077    5567 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:41.228086    5567 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:41.228117    5567 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:cc:23:9d:36:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2
	I0916 10:55:41.229799    5567 main.go:141] libmachine: STDOUT: 
	I0916 10:55:41.229814    5567 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:41.229833    5567 client.go:171] duration metric: took 271.757625ms to LocalClient.Create
	I0916 10:55:43.232221    5567 start.go:128] duration metric: took 2.334832458s to createHost
	I0916 10:55:43.232288    5567 start.go:83] releasing machines lock for "embed-certs-663000", held for 2.335310125s
	W0916 10:55:43.232560    5567 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-663000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:43.245199    5567 out.go:201] 
	W0916 10:55:43.260205    5567 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:43.260263    5567 out.go:270] * 
	W0916 10:55:43.262883    5567 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:55:43.275139    5567 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-663000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (64.960709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.34s)
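Note that everything up to the VM launch succeeds in both attempts: the boot2docker.iso is copied, the SSH key is created, and the qcow2 disk is built. The two qemu-img invocations in the log are a standard raw-to-qcow2 conversion followed by an in-place grow; a self-contained Go sketch of the same sequence, with hypothetical local paths standing in for the machine directory under MINIKUBE_HOME:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// createDisk mirrors the two qemu-img calls shown in the log:
// convert a raw scratch image to qcow2, then grow it in place.
func createDisk(raw, qcow2, grow string) error {
	if out, err := exec.Command("qemu-img", "convert",
		"-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize",
		qcow2, grow).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical paths; the real ones live under
	// $MINIKUBE_HOME/machines/<profile>/disk.qcow2(.raw).
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
		log.Fatal(err)
	}
}

The "+20000M" argument matches the 20000 MB disk size requested by the test flags, so the disk step is not the problem; the run only fails at the socket_vmnet connection.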

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-117000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-117000 create -f testdata/busybox.yaml: exit status 1 (30.199958ms)

** stderr ** 
	error: context "no-preload-117000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-117000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (33.269708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (32.6715ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
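This failure is purely downstream of FirstStart: because provisioning never completed, no "no-preload-117000" context was ever written to the kubeconfig, so every kubectl call exits 1 before touching a cluster. A hedged sketch of a pre-flight check that would distinguish "context missing" from a genuine deploy failure (the context name is from the log; kubectl must be on PATH, and "-o name" is the standard output flag of "kubectl config get-contexts"):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hasContext reports whether the active kubeconfig defines the named context.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("no-preload-117000")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("context exists:", ok)
}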

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-117000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-117000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-117000 describe deploy/metrics-server -n kube-system: exit status 1 (28.049916ms)

** stderr ** 
	error: context "no-preload-117000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-117000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (30.390042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.16s)
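The addons enable command itself succeeds here because it only rewrites the profile config, but the follow-up verification needs a live apiserver: the test scans "kubectl describe deploy/metrics-server" output for the rewritten image "fake.domain/registry.k8s.io/echoserver:1.4". A sketch of the same assertion done with a jsonpath query instead of scraping describe output, under the assumption that the cluster were actually up (context, namespace, and deployment names are taken from the log; this mirrors the intent of the test, not its exact code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Read just the container image off the metrics-server deployment.
	out, err := exec.Command("kubectl", "--context", "no-preload-117000",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "deployment not reachable: %v\n", err)
		os.Exit(1)
	}
	img := strings.TrimSpace(string(out))
	// The test expects the custom registry from --registries to be applied.
	if !strings.HasPrefix(img, "fake.domain/") {
		fmt.Fprintf(os.Stderr, "unexpected image: %q\n", img)
		os.Exit(1)
	}
	fmt.Println("addon image rewritten as expected:", img)
}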

TestStartStop/group/no-preload/serial/SecondStart (5.95s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.894875875s)

-- stdout --
	* [no-preload-117000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-117000" primary control-plane node in "no-preload-117000" cluster
	* Restarting existing qemu2 VM for "no-preload-117000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-117000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:55:37.444399    5611 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:37.444524    5611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:37.444527    5611 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:37.444530    5611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:37.444644    5611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:37.445696    5611 out.go:352] Setting JSON to false
	I0916 10:55:37.461983    5611 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3301,"bootTime":1726506036,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:55:37.462056    5611 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:55:37.467271    5611 out.go:177] * [no-preload-117000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:55:37.474214    5611 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:55:37.474262    5611 notify.go:220] Checking for updates...
	I0916 10:55:37.480267    5611 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:55:37.483252    5611 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:55:37.486286    5611 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:37.489185    5611 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:55:37.492225    5611 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:37.495566    5611 config.go:182] Loaded profile config "no-preload-117000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:37.495842    5611 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:37.500098    5611 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:55:37.507247    5611 start.go:297] selected driver: qemu2
	I0916 10:55:37.507255    5611 start.go:901] validating driver "qemu2" against &{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:37.507322    5611 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:37.509786    5611 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:55:37.509813    5611 cni.go:84] Creating CNI manager for ""
	I0916 10:55:37.509839    5611 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:55:37.509858    5611 start.go:340] cluster config:
	{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:37.513571    5611 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:37.521170    5611 out.go:177] * Starting "no-preload-117000" primary control-plane node in "no-preload-117000" cluster
	I0916 10:55:37.525037    5611 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:55:37.525137    5611 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/no-preload-117000/config.json ...
	I0916 10:55:37.525154    5611 cache.go:107] acquiring lock: {Name:mk0011622b8533efc9bbc0409e95a3ba3f2751c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:37.525158    5611 cache.go:107] acquiring lock: {Name:mk9957ee1584da5e9c74daf97ce53b8c1c1ab620 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:37.525180    5611 cache.go:107] acquiring lock: {Name:mk7ca50e9a6faf91f161fa2479069842d39b8c06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:37.525226    5611 cache.go:115] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 10:55:37.525231    5611 cache.go:115] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0916 10:55:37.525238    5611 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 57.416µs
	I0916 10:55:37.525252    5611 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0916 10:55:37.525222    5611 cache.go:107] acquiring lock: {Name:mk8d13c8d24bf5217489bcda242a85fdd2c04abd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:37.525260    5611 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.083µs
	I0916 10:55:37.525269    5611 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 10:55:37.525264    5611 cache.go:107] acquiring lock: {Name:mkc2583cf584f80a0feb6dc97b45aba9bfa85a8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:37.525257    5611 cache.go:107] acquiring lock: {Name:mkcead36d9763e160ef6872ec00decd1072203f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:37.525316    5611 cache.go:107] acquiring lock: {Name:mk4f14092a189f6aa49594a0772c63b9accd18cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:37.525351    5611 cache.go:115] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 10:55:37.525346    5611 cache.go:107] acquiring lock: {Name:mked4130933966c32d30ee6859bf85da1c2b3278 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:37.525384    5611 cache.go:115] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 10:55:37.525389    5611 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 95.875µs
	I0916 10:55:37.525393    5611 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 10:55:37.525355    5611 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 98.25µs
	I0916 10:55:37.525398    5611 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 10:55:37.525398    5611 cache.go:115] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 10:55:37.525401    5611 cache.go:115] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 10:55:37.525440    5611 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 235.292µs
	I0916 10:55:37.525447    5611 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 10:55:37.525409    5611 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 262.125µs
	I0916 10:55:37.525450    5611 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 10:55:37.525412    5611 cache.go:115] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 10:55:37.525454    5611 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 190.375µs
	I0916 10:55:37.525457    5611 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 10:55:37.525467    5611 cache.go:115] /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 10:55:37.525474    5611 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 185.25µs
	I0916 10:55:37.525478    5611 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 10:55:37.525482    5611 cache.go:87] Successfully saved all images to host disk.
	I0916 10:55:37.525523    5611 start.go:360] acquireMachinesLock for no-preload-117000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:37.525561    5611 start.go:364] duration metric: took 31.291µs to acquireMachinesLock for "no-preload-117000"
	I0916 10:55:37.525569    5611 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:55:37.525573    5611 fix.go:54] fixHost starting: 
	I0916 10:55:37.525689    5611 fix.go:112] recreateIfNeeded on no-preload-117000: state=Stopped err=<nil>
	W0916 10:55:37.525697    5611 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:55:37.534119    5611 out.go:177] * Restarting existing qemu2 VM for "no-preload-117000" ...
	I0916 10:55:37.538218    5611 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:37.538254    5611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:5c:63:bf:4a:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2
	I0916 10:55:37.540262    5611 main.go:141] libmachine: STDOUT: 
	I0916 10:55:37.540283    5611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:37.540313    5611 fix.go:56] duration metric: took 14.738083ms for fixHost
	I0916 10:55:37.540317    5611 start.go:83] releasing machines lock for "no-preload-117000", held for 14.752167ms
	W0916 10:55:37.540323    5611 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:37.540360    5611 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:37.540364    5611 start.go:729] Will try again in 5 seconds ...
	I0916 10:55:42.542421    5611 start.go:360] acquireMachinesLock for no-preload-117000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:43.232448    5611 start.go:364] duration metric: took 689.9455ms to acquireMachinesLock for "no-preload-117000"
	I0916 10:55:43.232609    5611 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:55:43.232628    5611 fix.go:54] fixHost starting: 
	I0916 10:55:43.233363    5611 fix.go:112] recreateIfNeeded on no-preload-117000: state=Stopped err=<nil>
	W0916 10:55:43.233419    5611 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:55:43.256125    5611 out.go:177] * Restarting existing qemu2 VM for "no-preload-117000" ...
	I0916 10:55:43.263084    5611 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:43.263256    5611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:5c:63:bf:4a:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/no-preload-117000/disk.qcow2
	I0916 10:55:43.272343    5611 main.go:141] libmachine: STDOUT: 
	I0916 10:55:43.272409    5611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:43.272520    5611 fix.go:56] duration metric: took 39.891667ms for fixHost
	I0916 10:55:43.272570    5611 start.go:83] releasing machines lock for "no-preload-117000", held for 40.086375ms
	W0916 10:55:43.272764    5611 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-117000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-117000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:43.286063    5611 out.go:201] 
	W0916 10:55:43.290201    5611 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:43.290233    5611 out.go:270] * 
	* 
	W0916 10:55:43.293286    5611 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:55:43.301150    5611 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (54.802417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.95s)
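
Every start failure in this group traces to the same root cause, visible in the stderr above: minikube launches QEMU through socket_vmnet_client, and nothing is accepting connections on /var/run/socket_vmnet, so the client fails with "Connection refused" before the VM can boot. A minimal host-side triage sketch (assuming socket_vmnet was installed via Homebrew as in the qemu2 driver setup; the service name below is an assumption, not taken from this log):

	# Is the unix socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Hypothetical check of the Homebrew-managed service (runs as root for vmnet access):
	sudo brew services info socket_vmnet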

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-663000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-663000 create -f testdata/busybox.yaml: exit status 1 (31.762042ms)

** stderr ** 
	error: context "embed-certs-663000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-663000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (31.438959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (32.704ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
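
This failure is downstream of the earlier start failure rather than a kubectl defect: the VM never booted, so no kubeconfig context named "embed-certs-663000" was ever written, and every kubectl --context invocation exits 1. A quick confirmation from the host, using the KUBECONFIG path printed elsewhere in this report (a sketch, not part of the test run):

	# Lists the contexts actually present; the profile's context should be missing.
	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/19649-964/kubeconfig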

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-117000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (33.456042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-117000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-117000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-117000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.354875ms)

** stderr ** 
	error: context "no-preload-117000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-117000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (29.943292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-663000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-663000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-663000 describe deploy/metrics-server -n kube-system: exit status 1 (28.823458ms)

** stderr ** 
	error: context "embed-certs-663000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-663000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (38.300458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-117000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (30.559125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)

TestStartStop/group/no-preload/serial/Pause (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-117000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-117000 --alsologtostderr -v=1: exit status 83 (50.840625ms)

-- stdout --
	* The control-plane node no-preload-117000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-117000"

-- /stdout --
** stderr ** 
	I0916 10:55:43.580063    5647 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:43.580240    5647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:43.580243    5647 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:43.580246    5647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:43.580385    5647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:43.580624    5647 out.go:352] Setting JSON to false
	I0916 10:55:43.580630    5647 mustload.go:65] Loading cluster: no-preload-117000
	I0916 10:55:43.580872    5647 config.go:182] Loaded profile config "no-preload-117000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:43.584663    5647 out.go:177] * The control-plane node no-preload-117000 host is not running: state=Stopped
	I0916 10:55:43.594784    5647 out.go:177]   To start a cluster, run: "minikube start -p no-preload-117000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-117000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (37.28575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (28.737666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-665000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-665000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.040911667s)

-- stdout --
	* [default-k8s-diff-port-665000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-665000" primary control-plane node in "default-k8s-diff-port-665000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-665000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:55:44.013136    5682 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:44.013264    5682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:44.013268    5682 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:44.013270    5682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:44.013409    5682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:44.014498    5682 out.go:352] Setting JSON to false
	I0916 10:55:44.030574    5682 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3308,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:55:44.030647    5682 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:55:44.035830    5682 out.go:177] * [default-k8s-diff-port-665000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:55:44.043811    5682 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:55:44.043834    5682 notify.go:220] Checking for updates...
	I0916 10:55:44.050750    5682 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:55:44.052326    5682 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:55:44.055758    5682 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:44.058756    5682 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:55:44.061855    5682 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:44.065119    5682 config.go:182] Loaded profile config "embed-certs-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:44.065183    5682 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:44.065239    5682 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:44.069775    5682 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:55:44.076771    5682 start.go:297] selected driver: qemu2
	I0916 10:55:44.076777    5682 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:55:44.076783    5682 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:44.079054    5682 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:55:44.081769    5682 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:55:44.084880    5682 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:55:44.084904    5682 cni.go:84] Creating CNI manager for ""
	I0916 10:55:44.084932    5682 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:55:44.084938    5682 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:55:44.084973    5682 start.go:340] cluster config:
	{Name:default-k8s-diff-port-665000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:44.088742    5682 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:44.095737    5682 out.go:177] * Starting "default-k8s-diff-port-665000" primary control-plane node in "default-k8s-diff-port-665000" cluster
	I0916 10:55:44.098733    5682 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:55:44.098747    5682 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:55:44.098757    5682 cache.go:56] Caching tarball of preloaded images
	I0916 10:55:44.098825    5682 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:55:44.098830    5682 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:55:44.098879    5682 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/default-k8s-diff-port-665000/config.json ...
	I0916 10:55:44.098890    5682 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/default-k8s-diff-port-665000/config.json: {Name:mkd9edb1b2bb147126b68c7d376ba0544cf7fdab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:55:44.099092    5682 start.go:360] acquireMachinesLock for default-k8s-diff-port-665000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:44.099128    5682 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "default-k8s-diff-port-665000"
	I0916 10:55:44.099139    5682 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-665000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:44.099167    5682 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:44.106651    5682 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:55:44.124598    5682 start.go:159] libmachine.API.Create for "default-k8s-diff-port-665000" (driver="qemu2")
	I0916 10:55:44.124627    5682 client.go:168] LocalClient.Create starting
	I0916 10:55:44.124685    5682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:44.124716    5682 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:44.124726    5682 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:44.124762    5682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:44.124790    5682 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:44.124797    5682 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:44.125135    5682 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:44.290698    5682 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:44.443251    5682 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:44.443265    5682 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:44.443478    5682 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2
	I0916 10:55:44.453158    5682 main.go:141] libmachine: STDOUT: 
	I0916 10:55:44.453177    5682 main.go:141] libmachine: STDERR: 
	I0916 10:55:44.453245    5682 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2 +20000M
	I0916 10:55:44.461329    5682 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:44.461344    5682 main.go:141] libmachine: STDERR: 
	I0916 10:55:44.461361    5682 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2
	I0916 10:55:44.461369    5682 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:44.461381    5682 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:44.461403    5682 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:8e:4c:31:19:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2
	I0916 10:55:44.463035    5682 main.go:141] libmachine: STDOUT: 
	I0916 10:55:44.463048    5682 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:44.463069    5682 client.go:171] duration metric: took 338.446042ms to LocalClient.Create
	I0916 10:55:46.465240    5682 start.go:128] duration metric: took 2.366109417s to createHost
	I0916 10:55:46.465319    5682 start.go:83] releasing machines lock for "default-k8s-diff-port-665000", held for 2.366247833s
	W0916 10:55:46.465402    5682 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:46.481774    5682 out.go:177] * Deleting "default-k8s-diff-port-665000" in qemu2 ...
	W0916 10:55:46.513267    5682 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:46.513285    5682 start.go:729] Will try again in 5 seconds ...
	I0916 10:55:51.515271    5682 start.go:360] acquireMachinesLock for default-k8s-diff-port-665000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:51.522301    5682 start.go:364] duration metric: took 6.949167ms to acquireMachinesLock for "default-k8s-diff-port-665000"
	I0916 10:55:51.522361    5682 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-665000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:51.522591    5682 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:51.531390    5682 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:55:51.580852    5682 start.go:159] libmachine.API.Create for "default-k8s-diff-port-665000" (driver="qemu2")
	I0916 10:55:51.580899    5682 client.go:168] LocalClient.Create starting
	I0916 10:55:51.580998    5682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:51.581058    5682 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:51.581076    5682 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:51.581122    5682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:51.581162    5682 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:51.581175    5682 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:51.581630    5682 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:51.755602    5682 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:51.955377    5682 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:51.955390    5682 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:51.955572    5682 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2
	I0916 10:55:51.970357    5682 main.go:141] libmachine: STDOUT: 
	I0916 10:55:51.970381    5682 main.go:141] libmachine: STDERR: 
	I0916 10:55:51.970450    5682 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2 +20000M
	I0916 10:55:51.984354    5682 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:51.984374    5682 main.go:141] libmachine: STDERR: 
	I0916 10:55:51.984390    5682 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2
	I0916 10:55:51.984398    5682 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:51.984406    5682 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:51.984439    5682 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:34:75:71:6e:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2
	I0916 10:55:51.986382    5682 main.go:141] libmachine: STDOUT: 
	I0916 10:55:51.986400    5682 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:51.986420    5682 client.go:171] duration metric: took 405.528625ms to LocalClient.Create
	I0916 10:55:53.988555    5682 start.go:128] duration metric: took 2.466005291s to createHost
	I0916 10:55:53.988640    5682 start.go:83] releasing machines lock for "default-k8s-diff-port-665000", held for 2.466388625s
	W0916 10:55:53.988931    5682 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-665000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-665000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:54.002476    5682 out.go:201] 
	W0916 10:55:54.006703    5682 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:54.006729    5682 out.go:270] * 
	* 
	W0916 10:55:54.008711    5682 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:55:54.017652    5682 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-665000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (51.299291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.09s)
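
Note that the fresh-create path fails at the same point as the restart path: qemu-img creates and resizes the disk image without error, and the run only breaks when socket_vmnet_client asks the daemon for a network file descriptor to pass to QEMU. Restarting the daemon on the CI host should clear every failure of this shape; a sketch, assuming the Homebrew-managed service from the qemu2 driver setup (the formula name "socket_vmnet" is an assumption):

	# Restart the daemon, then re-run one failing start command from this report:
	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-665000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.31.1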

TestStartStop/group/embed-certs/serial/SecondStart (5.78s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-663000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-663000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.734649458s)

-- stdout --
	* [embed-certs-663000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-663000" primary control-plane node in "embed-certs-663000" cluster
	* Restarting existing qemu2 VM for "embed-certs-663000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-663000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0916 10:55:45.856045    5702 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:45.856176    5702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:45.856179    5702 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:45.856181    5702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:45.856306    5702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:45.857292    5702 out.go:352] Setting JSON to false
	I0916 10:55:45.873356    5702 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3309,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:55:45.873428    5702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:55:45.878909    5702 out.go:177] * [embed-certs-663000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:55:45.885703    5702 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:55:45.885762    5702 notify.go:220] Checking for updates...
	I0916 10:55:45.893862    5702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:55:45.896815    5702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:55:45.899862    5702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:45.902923    5702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:55:45.904417    5702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:45.908105    5702 config.go:182] Loaded profile config "embed-certs-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:45.908372    5702 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:45.912916    5702 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:55:45.918845    5702 start.go:297] selected driver: qemu2
	I0916 10:55:45.918852    5702 start.go:901] validating driver "qemu2" against &{Name:embed-certs-663000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:embed-certs-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:45.918938    5702 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:45.921555    5702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:55:45.921586    5702 cni.go:84] Creating CNI manager for ""
	I0916 10:55:45.921612    5702 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:55:45.921651    5702 start.go:340] cluster config:
	{Name:embed-certs-663000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:45.925383    5702 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:45.932807    5702 out.go:177] * Starting "embed-certs-663000" primary control-plane node in "embed-certs-663000" cluster
	I0916 10:55:45.936885    5702 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:55:45.936903    5702 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:55:45.936915    5702 cache.go:56] Caching tarball of preloaded images
	I0916 10:55:45.936972    5702 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:55:45.936977    5702 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:55:45.937039    5702 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/embed-certs-663000/config.json ...
	I0916 10:55:45.937607    5702 start.go:360] acquireMachinesLock for embed-certs-663000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:46.465452    5702 start.go:364] duration metric: took 527.826542ms to acquireMachinesLock for "embed-certs-663000"
	I0916 10:55:46.465612    5702 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:55:46.465648    5702 fix.go:54] fixHost starting: 
	I0916 10:55:46.466328    5702 fix.go:112] recreateIfNeeded on embed-certs-663000: state=Stopped err=<nil>
	W0916 10:55:46.466379    5702 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:55:46.475509    5702 out.go:177] * Restarting existing qemu2 VM for "embed-certs-663000" ...
	I0916 10:55:46.484722    5702 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:46.484934    5702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:cc:23:9d:36:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2
	I0916 10:55:46.496080    5702 main.go:141] libmachine: STDOUT: 
	I0916 10:55:46.496149    5702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:46.496289    5702 fix.go:56] duration metric: took 30.643334ms for fixHost
	I0916 10:55:46.496315    5702 start.go:83] releasing machines lock for "embed-certs-663000", held for 30.823875ms
	W0916 10:55:46.496345    5702 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:46.496477    5702 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:46.496493    5702 start.go:729] Will try again in 5 seconds ...
	I0916 10:55:51.498531    5702 start.go:360] acquireMachinesLock for embed-certs-663000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:51.498999    5702 start.go:364] duration metric: took 365.542µs to acquireMachinesLock for "embed-certs-663000"
	I0916 10:55:51.499149    5702 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:55:51.499169    5702 fix.go:54] fixHost starting: 
	I0916 10:55:51.499945    5702 fix.go:112] recreateIfNeeded on embed-certs-663000: state=Stopped err=<nil>
	W0916 10:55:51.499976    5702 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:55:51.508298    5702 out.go:177] * Restarting existing qemu2 VM for "embed-certs-663000" ...
	I0916 10:55:51.512403    5702 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:51.512580    5702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:cc:23:9d:36:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/embed-certs-663000/disk.qcow2
	I0916 10:55:51.522087    5702 main.go:141] libmachine: STDOUT: 
	I0916 10:55:51.522141    5702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:51.522224    5702 fix.go:56] duration metric: took 23.055167ms for fixHost
	I0916 10:55:51.522243    5702 start.go:83] releasing machines lock for "embed-certs-663000", held for 23.220084ms
	W0916 10:55:51.522415    5702 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-663000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:51.537371    5702 out.go:201] 
	W0916 10:55:51.541431    5702 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:51.541461    5702 out.go:270] * 
	W0916 10:55:51.543598    5702 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:55:51.552122    5702 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-663000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (49.143541ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.78s)
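Every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so QEMU never receives its network fd. A minimal pre-flight check on the build host, assuming the /opt/socket_vmnet layout shown in the failing command line above (the gateway address below is illustrative, per the socket_vmnet README):

    # Confirm the daemon is running and its unix socket exists before any qemu2 start.
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # If the daemon is down, relaunch it (requires root).
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &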

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-663000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (34.084042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-663000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-663000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-663000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.473417ms)
** stderr ** 
	error: context "embed-certs-663000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-663000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (33.816125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
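The context "embed-certs-663000" does not exist errors are downstream of the failed start: minikube only writes a kubeconfig context once the node provisions, so every kubectl call against the profile fails identically. A quick way to confirm, against the kubeconfig this run uses:

    # A profile whose VM never started will be absent from this list.
    kubectl config get-contexts --kubeconfig=/Users/jenkins/minikube-integration/19649-964/kubeconfig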

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-663000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (30.944458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
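The want-list above is the stock v1.31.1 image set; with no running host, image list returns nothing and every entry is reported missing. Against a healthy profile the comparison can be reproduced by hand; a sketch, assuming the JSON output exposes a repoTags field as in current minikube releases:

    # Dump the tags the profile actually holds, sorted for diffing against the want-list.
    out/minikube-darwin-arm64 -p embed-certs-663000 image list --format=json \
      | jq -r '.[].repoTags[]' | sort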

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-663000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-663000 --alsologtostderr -v=1: exit status 83 (45.2895ms)
-- stdout --
	* The control-plane node embed-certs-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-663000"
-- /stdout --
** stderr ** 
	I0916 10:55:51.821134    5722 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:51.821301    5722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:51.821306    5722 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:51.821309    5722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:51.821439    5722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:51.821643    5722 out.go:352] Setting JSON to false
	I0916 10:55:51.821650    5722 mustload.go:65] Loading cluster: embed-certs-663000
	I0916 10:55:51.821873    5722 config.go:182] Loaded profile config "embed-certs-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:51.825361    5722 out.go:177] * The control-plane node embed-certs-663000 host is not running: state=Stopped
	I0916 10:55:51.829303    5722 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-663000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-663000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (31.304917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (30.222667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
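Exit status 83 here accompanies the "host is not running" advisory rather than a pause failure proper. A wrapper script can gate on the same status probe the post-mortem uses; a sketch built from the commands above:

    # Pause only when the control-plane host actually reports Running.
    state=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p embed-certs-663000)
    if [ "$state" = "Running" ]; then
      out/minikube-darwin-arm64 pause -p embed-certs-663000
    fi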

TestStartStop/group/newest-cni/serial/FirstStart (11.67s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-296000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-296000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.607701125s)
-- stdout --
	* [newest-cni-296000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-296000" primary control-plane node in "newest-cni-296000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-296000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0916 10:55:52.146282    5742 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:52.146438    5742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:52.146441    5742 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:52.146443    5742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:52.146564    5742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:52.147635    5742 out.go:352] Setting JSON to false
	I0916 10:55:52.163902    5742 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3316,"bootTime":1726506036,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:55:52.163970    5742 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:55:52.168271    5742 out.go:177] * [newest-cni-296000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:55:52.176275    5742 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:55:52.176308    5742 notify.go:220] Checking for updates...
	I0916 10:55:52.182339    5742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:55:52.185296    5742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:55:52.188305    5742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:52.196203    5742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:55:52.199275    5742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:52.202673    5742 config.go:182] Loaded profile config "default-k8s-diff-port-665000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:52.202735    5742 config.go:182] Loaded profile config "multinode-416000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:52.202786    5742 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:52.207276    5742 out.go:177] * Using the qemu2 driver based on user configuration
	I0916 10:55:52.213166    5742 start.go:297] selected driver: qemu2
	I0916 10:55:52.213172    5742 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:55:52.213177    5742 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:52.215679    5742 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0916 10:55:52.215721    5742 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0916 10:55:52.224274    5742 out.go:177] * Automatically selected the socket_vmnet network
	I0916 10:55:52.227294    5742 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0916 10:55:52.227318    5742 cni.go:84] Creating CNI manager for ""
	I0916 10:55:52.227349    5742 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:55:52.227357    5742 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:55:52.227384    5742 start.go:340] cluster config:
	{Name:newest-cni-296000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:52.231095    5742 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:52.239296    5742 out.go:177] * Starting "newest-cni-296000" primary control-plane node in "newest-cni-296000" cluster
	I0916 10:55:52.243212    5742 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:55:52.243229    5742 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:55:52.243246    5742 cache.go:56] Caching tarball of preloaded images
	I0916 10:55:52.243322    5742 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:55:52.243328    5742 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:55:52.243414    5742 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/newest-cni-296000/config.json ...
	I0916 10:55:52.243425    5742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/newest-cni-296000/config.json: {Name:mk684687d9118f95ee452047470d8f891a96bd92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:55:52.243857    5742 start.go:360] acquireMachinesLock for newest-cni-296000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:53.988769    5742 start.go:364] duration metric: took 1.744917083s to acquireMachinesLock for "newest-cni-296000"
	I0916 10:55:53.988955    5742 start.go:93] Provisioning new machine with config: &{Name:newest-cni-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:55:53.989162    5742 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:55:53.998641    5742 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:55:54.047861    5742 start.go:159] libmachine.API.Create for "newest-cni-296000" (driver="qemu2")
	I0916 10:55:54.047918    5742 client.go:168] LocalClient.Create starting
	I0916 10:55:54.048021    5742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:55:54.048072    5742 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:54.048091    5742 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:54.048151    5742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:55:54.048195    5742 main.go:141] libmachine: Decoding PEM data...
	I0916 10:55:54.048206    5742 main.go:141] libmachine: Parsing certificate...
	I0916 10:55:54.048877    5742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:55:54.233513    5742 main.go:141] libmachine: Creating SSH key...
	I0916 10:55:54.282538    5742 main.go:141] libmachine: Creating Disk image...
	I0916 10:55:54.282550    5742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:55:54.282778    5742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2
	I0916 10:55:54.292694    5742 main.go:141] libmachine: STDOUT: 
	I0916 10:55:54.292721    5742 main.go:141] libmachine: STDERR: 
	I0916 10:55:54.292799    5742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2 +20000M
	I0916 10:55:54.302391    5742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:55:54.302418    5742 main.go:141] libmachine: STDERR: 
	I0916 10:55:54.302434    5742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2
	I0916 10:55:54.302440    5742 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:55:54.302453    5742 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:54.302482    5742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:80:06:7b:20:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2
	I0916 10:55:54.304620    5742 main.go:141] libmachine: STDOUT: 
	I0916 10:55:54.304667    5742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:54.304706    5742 client.go:171] duration metric: took 256.784458ms to LocalClient.Create
	I0916 10:55:56.307191    5742 start.go:128] duration metric: took 2.318044708s to createHost
	I0916 10:55:56.307283    5742 start.go:83] releasing machines lock for "newest-cni-296000", held for 2.318524458s
	W0916 10:55:56.307332    5742 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:56.317669    5742 out.go:177] * Deleting "newest-cni-296000" in qemu2 ...
	W0916 10:55:56.348180    5742 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:56.348203    5742 start.go:729] Will try again in 5 seconds ...
	I0916 10:56:01.350268    5742 start.go:360] acquireMachinesLock for newest-cni-296000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:56:01.350651    5742 start.go:364] duration metric: took 288.25µs to acquireMachinesLock for "newest-cni-296000"
	I0916 10:56:01.350769    5742 start.go:93] Provisioning new machine with config: &{Name:newest-cni-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 10:56:01.351036    5742 start.go:125] createHost starting for "" (driver="qemu2")
	I0916 10:56:01.364568    5742 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 10:56:01.415022    5742 start.go:159] libmachine.API.Create for "newest-cni-296000" (driver="qemu2")
	I0916 10:56:01.415082    5742 client.go:168] LocalClient.Create starting
	I0916 10:56:01.415199    5742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/ca.pem
	I0916 10:56:01.415264    5742 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:01.415281    5742 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:01.415348    5742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19649-964/.minikube/certs/cert.pem
	I0916 10:56:01.415395    5742 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:01.415406    5742 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:01.415926    5742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19649-964/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso...
	I0916 10:56:01.588878    5742 main.go:141] libmachine: Creating SSH key...
	I0916 10:56:01.641237    5742 main.go:141] libmachine: Creating Disk image...
	I0916 10:56:01.641246    5742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0916 10:56:01.641432    5742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2.raw /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2
	I0916 10:56:01.650456    5742 main.go:141] libmachine: STDOUT: 
	I0916 10:56:01.650473    5742 main.go:141] libmachine: STDERR: 
	I0916 10:56:01.650533    5742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2 +20000M
	I0916 10:56:01.658349    5742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0916 10:56:01.658367    5742 main.go:141] libmachine: STDERR: 
	I0916 10:56:01.658378    5742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2
	I0916 10:56:01.658382    5742 main.go:141] libmachine: Starting QEMU VM...
	I0916 10:56:01.658392    5742 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:56:01.658425    5742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:28:d7:80:fc:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2
	I0916 10:56:01.660116    5742 main.go:141] libmachine: STDOUT: 
	I0916 10:56:01.660136    5742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:56:01.660148    5742 client.go:171] duration metric: took 245.065916ms to LocalClient.Create
	I0916 10:56:03.662280    5742 start.go:128] duration metric: took 2.311254541s to createHost
	I0916 10:56:03.662339    5742 start.go:83] releasing machines lock for "newest-cni-296000", held for 2.311736333s
	W0916 10:56:03.662617    5742 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-296000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:56:03.677253    5742 out.go:201] 
	W0916 10:56:03.689267    5742 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:56:03.689294    5742 out.go:270] * 
	W0916 10:56:03.691879    5742 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:56:03.703006    5742 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-296000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000: exit status 7 (63.3695ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.67s)
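Note that the qemu-img convert and resize steps in the log succeed; only the network hookup fails. Since socket_vmnet_client's role is to connect to the daemon's socket and exec the given command with the connection as an inherited fd (hence the -netdev socket,id=net0,fd=3 in the QEMU command line), the daemon can be smoke-tested without booting a VM:

    # Reproduces the same "Connection refused" when the daemon is not listening.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true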

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-665000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-665000 create -f testdata/busybox.yaml: exit status 1 (31.393375ms)
** stderr ** 
	error: context "default-k8s-diff-port-665000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-665000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (34.166125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (33.511959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
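The deploy never reaches a cluster: the profile exists on disk, but its host is Stopped and no kubeconfig context was ever written. The profile inventory makes this visible at a glance:

    # Lists every profile with its host status, stopped or errored ones included.
    out/minikube-darwin-arm64 profile list --output=json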

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-665000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-665000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-665000 describe deploy/metrics-server -n kube-system: exit status 1 (27.833875ms)
** stderr ** 
	error: context "default-k8s-diff-port-665000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-665000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (29.658333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)
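The addons enable step itself exits cleanly in the log; what fails is the follow-up kubectl describe against the missing context. With a running cluster, the override could be verified straight from the deployment spec; a sketch using the same names as the test:

    # Confirm the metrics-server image/registry override landed in the deployment.
    kubectl --context default-k8s-diff-port-665000 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'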

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-665000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-665000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.683369042s)
-- stdout --
	* [default-k8s-diff-port-665000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-665000" primary control-plane node in "default-k8s-diff-port-665000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-665000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-665000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0916 10:55:58.083921    5786 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:55:58.084092    5786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:58.084096    5786 out.go:358] Setting ErrFile to fd 2...
	I0916 10:55:58.084098    5786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:55:58.084241    5786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:55:58.085299    5786 out.go:352] Setting JSON to false
	I0916 10:55:58.101434    5786 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3322,"bootTime":1726506036,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:55:58.101504    5786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:55:58.105243    5786 out.go:177] * [default-k8s-diff-port-665000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:55:58.112133    5786 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:55:58.112207    5786 notify.go:220] Checking for updates...
	I0916 10:55:58.120186    5786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:55:58.123184    5786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:55:58.126211    5786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:55:58.129220    5786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:55:58.132140    5786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:55:58.135513    5786 config.go:182] Loaded profile config "default-k8s-diff-port-665000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:55:58.135801    5786 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:55:58.140105    5786 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:55:58.147181    5786 start.go:297] selected driver: qemu2
	I0916 10:55:58.147187    5786 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-665000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:58.147234    5786 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:55:58.149574    5786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:55:58.149601    5786 cni.go:84] Creating CNI manager for ""
	I0916 10:55:58.149625    5786 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:55:58.149644    5786 start.go:340] cluster config:
	{Name:default-k8s-diff-port-665000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-665000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:55:58.153140    5786 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:55:58.158164    5786 out.go:177] * Starting "default-k8s-diff-port-665000" primary control-plane node in "default-k8s-diff-port-665000" cluster
	I0916 10:55:58.162252    5786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:55:58.162266    5786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:55:58.162276    5786 cache.go:56] Caching tarball of preloaded images
	I0916 10:55:58.162344    5786 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:55:58.162350    5786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:55:58.162424    5786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/default-k8s-diff-port-665000/config.json ...
	I0916 10:55:58.162917    5786 start.go:360] acquireMachinesLock for default-k8s-diff-port-665000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:55:58.162948    5786 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "default-k8s-diff-port-665000"
	I0916 10:55:58.162956    5786 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:55:58.162961    5786 fix.go:54] fixHost starting: 
	I0916 10:55:58.163072    5786 fix.go:112] recreateIfNeeded on default-k8s-diff-port-665000: state=Stopped err=<nil>
	W0916 10:55:58.163080    5786 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:55:58.167168    5786 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-665000" ...
	I0916 10:55:58.175151    5786 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:55:58.175182    5786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:34:75:71:6e:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2
	I0916 10:55:58.177281    5786 main.go:141] libmachine: STDOUT: 
	I0916 10:55:58.177304    5786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:55:58.177329    5786 fix.go:56] duration metric: took 14.367541ms for fixHost
	I0916 10:55:58.177333    5786 start.go:83] releasing machines lock for "default-k8s-diff-port-665000", held for 14.382ms
	W0916 10:55:58.177339    5786 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:55:58.177370    5786 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:55:58.177374    5786 start.go:729] Will try again in 5 seconds ...
	I0916 10:56:03.177956    5786 start.go:360] acquireMachinesLock for default-k8s-diff-port-665000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:56:03.662560    5786 start.go:364] duration metric: took 484.436792ms to acquireMachinesLock for "default-k8s-diff-port-665000"
	I0916 10:56:03.662731    5786 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:56:03.662751    5786 fix.go:54] fixHost starting: 
	I0916 10:56:03.663488    5786 fix.go:112] recreateIfNeeded on default-k8s-diff-port-665000: state=Stopped err=<nil>
	W0916 10:56:03.663514    5786 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:56:03.685031    5786 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-665000" ...
	I0916 10:56:03.692246    5786 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:56:03.692703    5786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:34:75:71:6e:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/default-k8s-diff-port-665000/disk.qcow2
	I0916 10:56:03.701445    5786 main.go:141] libmachine: STDOUT: 
	I0916 10:56:03.701508    5786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:56:03.701590    5786 fix.go:56] duration metric: took 38.836417ms for fixHost
	I0916 10:56:03.701611    5786 start.go:83] releasing machines lock for "default-k8s-diff-port-665000", held for 39.004292ms
	W0916 10:56:03.701792    5786 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-665000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-665000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:56:03.715171    5786 out.go:201] 
	W0916 10:56:03.719228    5786 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:56:03.719259    5786 out.go:270] * 
	* 
	W0916 10:56:03.721213    5786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:56:03.730213    5786 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-665000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (52.82225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.74s)
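
[Editor's note] Every start failure in this run shares one root cause. The qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client (see the "executing:" lines above), which must first connect to the socket_vmnet daemon on the unix socket /var/run/socket_vmnet and then hand the resulting file descriptor to qemu-system-aarch64 (-netdev socket,id=net0,fd=3). "Connection refused" on that socket means no daemon is listening, so no profile on this host can boot. A minimal sanity check might look like the sketch below; these are hypothetical diagnostic commands for the CI host, not part of the test suite, and the launchd label is an assumption based on a standard socket_vmnet install:

	# Is anything bound to the socket the driver dials?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet          # list unix-domain sockets, filter for the daemon

	# If socket_vmnet is managed by launchd, inspect it and force a restart.
	# (The label io.github.lima-vm.socket_vmnet is assumed; match it to the installed plist.)
	sudo launchctl list | grep socket_vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

Once the daemon is listening again, re-running the quoted "out/minikube-darwin-arm64 start -p default-k8s-diff-port-665000 ..." command should get past the GUEST_PROVISION exit.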

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-665000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (35.58625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
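
[Editor's note] This subtest and the ones that follow fail as a cascade from SecondStart: the VM never came back up, so minikube never rewrote the kubeconfig entry for the profile, and every kubectl-based check aborts with context "default-k8s-diff-port-665000" does not exist. A quick, hypothetical way to confirm the cascade against the kubeconfig this run uses:

	# The failed profile should be missing from the context list.
	KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig kubectl config get-contexts
	# Reproduces the exact error the test records.
	kubectl --context default-k8s-diff-port-665000 get pods -n kubernetes-dashboard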

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-665000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-665000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-665000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.084958ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-665000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-665000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (34.530083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-665000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
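
[Editor's note] The diff above reads in "(-want +got)" form: each "-" line is an image the test expected the image list to contain, and the absence of any "+" lines means the command returned nothing, which is consistent with a VM that never booted. Re-running the command the test quotes (shown below verbatim) against the stopped profile reports no images at all:

	# Command quoted from the test above; with the host stopped there is nothing to list.
	out/minikube-darwin-arm64 -p default-k8s-diff-port-665000 image list --format=json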
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (30.032ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-665000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-665000 --alsologtostderr -v=1: exit status 83 (40.529708ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-665000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-665000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:56:03.986759    5817 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:56:03.986904    5817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:03.986907    5817 out.go:358] Setting ErrFile to fd 2...
	I0916 10:56:03.986910    5817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:03.987048    5817 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:56:03.987259    5817 out.go:352] Setting JSON to false
	I0916 10:56:03.987267    5817 mustload.go:65] Loading cluster: default-k8s-diff-port-665000
	I0916 10:56:03.987496    5817 config.go:182] Loaded profile config "default-k8s-diff-port-665000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:56:03.991636    5817 out.go:177] * The control-plane node default-k8s-diff-port-665000 host is not running: state=Stopped
	I0916 10:56:03.995665    5817 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-665000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-665000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (29.241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (29.334333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-665000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-296000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-296000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.182421042s)

                                                
                                                
-- stdout --
	* [newest-cni-296000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-296000" primary control-plane node in "newest-cni-296000" cluster
	* Restarting existing qemu2 VM for "newest-cni-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-296000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:56:05.929582    5846 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:56:05.929711    5846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:05.929714    5846 out.go:358] Setting ErrFile to fd 2...
	I0916 10:56:05.929717    5846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:05.929845    5846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:56:05.930887    5846 out.go:352] Setting JSON to false
	I0916 10:56:05.947274    5846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3329,"bootTime":1726506036,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:56:05.947352    5846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:56:05.952784    5846 out.go:177] * [newest-cni-296000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:56:05.958728    5846 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:56:05.958782    5846 notify.go:220] Checking for updates...
	I0916 10:56:05.965641    5846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:56:05.968728    5846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:56:05.971726    5846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:56:05.973325    5846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:56:05.976715    5846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:56:05.980022    5846 config.go:182] Loaded profile config "newest-cni-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:56:05.980306    5846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:56:05.984580    5846 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:56:05.991687    5846 start.go:297] selected driver: qemu2
	I0916 10:56:05.991694    5846 start.go:901] validating driver "qemu2" against &{Name:newest-cni-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:05.991777    5846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:56:05.994148    5846 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0916 10:56:05.994170    5846 cni.go:84] Creating CNI manager for ""
	I0916 10:56:05.994194    5846 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:56:05.994222    5846 start.go:340] cluster config:
	{Name:newest-cni-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:05.997758    5846 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:56:06.005756    5846 out.go:177] * Starting "newest-cni-296000" primary control-plane node in "newest-cni-296000" cluster
	I0916 10:56:06.009537    5846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:56:06.009556    5846 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:56:06.009573    5846 cache.go:56] Caching tarball of preloaded images
	I0916 10:56:06.009646    5846 preload.go:172] Found /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0916 10:56:06.009654    5846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:56:06.009718    5846 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/newest-cni-296000/config.json ...
	I0916 10:56:06.010208    5846 start.go:360] acquireMachinesLock for newest-cni-296000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:56:06.010244    5846 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "newest-cni-296000"
	I0916 10:56:06.010253    5846 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:56:06.010258    5846 fix.go:54] fixHost starting: 
	I0916 10:56:06.010383    5846 fix.go:112] recreateIfNeeded on newest-cni-296000: state=Stopped err=<nil>
	W0916 10:56:06.010393    5846 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:56:06.013841    5846 out.go:177] * Restarting existing qemu2 VM for "newest-cni-296000" ...
	I0916 10:56:06.021740    5846 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:56:06.021783    5846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:28:d7:80:fc:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2
	I0916 10:56:06.023847    5846 main.go:141] libmachine: STDOUT: 
	I0916 10:56:06.023866    5846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:56:06.023898    5846 fix.go:56] duration metric: took 13.639708ms for fixHost
	I0916 10:56:06.023903    5846 start.go:83] releasing machines lock for "newest-cni-296000", held for 13.654375ms
	W0916 10:56:06.023909    5846 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:56:06.023941    5846 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:56:06.023946    5846 start.go:729] Will try again in 5 seconds ...
	I0916 10:56:11.025952    5846 start.go:360] acquireMachinesLock for newest-cni-296000: {Name:mkf0efe8505e51edc938b4aab4f71986eeb6c134 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 10:56:11.026361    5846 start.go:364] duration metric: took 327.5µs to acquireMachinesLock for "newest-cni-296000"
	I0916 10:56:11.026492    5846 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:56:11.026511    5846 fix.go:54] fixHost starting: 
	I0916 10:56:11.027213    5846 fix.go:112] recreateIfNeeded on newest-cni-296000: state=Stopped err=<nil>
	W0916 10:56:11.027245    5846 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:56:11.036565    5846 out.go:177] * Restarting existing qemu2 VM for "newest-cni-296000" ...
	I0916 10:56:11.038168    5846 qemu.go:418] Using hvf for hardware acceleration
	I0916 10:56:11.038416    5846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:28:d7:80:fc:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19649-964/.minikube/machines/newest-cni-296000/disk.qcow2
	I0916 10:56:11.047292    5846 main.go:141] libmachine: STDOUT: 
	I0916 10:56:11.047372    5846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0916 10:56:11.047479    5846 fix.go:56] duration metric: took 20.963333ms for fixHost
	I0916 10:56:11.047500    5846 start.go:83] releasing machines lock for "newest-cni-296000", held for 21.114583ms
	W0916 10:56:11.047713    5846 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-296000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0916 10:56:11.055556    5846 out.go:201] 
	W0916 10:56:11.058554    5846 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0916 10:56:11.058585    5846 out.go:270] * 
	* 
	W0916 10:56:11.061741    5846 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:56:11.072533    5846 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-296000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000: exit status 7 (66.606792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
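
[Editor's note] The newest-cni profile hits the identical socket_vmnet refusal, which confirms the problem is host-wide rather than per-profile. The log's own hint ("minikube delete -p newest-cni-296000" may fix it) only helps once the daemon is back: the saved profile pins Network:socket_vmnet, so a recreated profile dials the same dead socket. A plausible recovery sequence, assuming the daemon has been restarted as sketched earlier (--network socket_vmnet is the documented way to select this backend for the qemu driver):

	out/minikube-darwin-arm64 delete -p newest-cni-296000
	out/minikube-darwin-arm64 start -p newest-cni-296000 --driver=qemu2 --network=socket_vmnet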

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-296000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000: exit status 7 (29.117625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-296000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-296000 --alsologtostderr -v=1: exit status 83 (42.222416ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-296000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:56:11.254688    5860 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:56:11.254837    5860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:11.254843    5860 out.go:358] Setting ErrFile to fd 2...
	I0916 10:56:11.254846    5860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:11.254978    5860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:56:11.255203    5860 out.go:352] Setting JSON to false
	I0916 10:56:11.255208    5860 mustload.go:65] Loading cluster: newest-cni-296000
	I0916 10:56:11.255436    5860 config.go:182] Loaded profile config "newest-cni-296000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:56:11.259752    5860 out.go:177] * The control-plane node newest-cni-296000 host is not running: state=Stopped
	I0916 10:56:11.263807    5860 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-296000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-296000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000: exit status 7 (30.024708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-296000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000: exit status 7 (30.21525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (154/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 6.1
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.1
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 140.43
29 TestAddons/serial/Volcano 39.18
31 TestAddons/serial/GCPAuth/Namespaces 0.08
34 TestAddons/parallel/Ingress 17.47
35 TestAddons/parallel/InspektorGadget 10.29
36 TestAddons/parallel/MetricsServer 5.26
39 TestAddons/parallel/CSI 42.72
40 TestAddons/parallel/Headlamp 16.65
41 TestAddons/parallel/CloudSpanner 5.17
42 TestAddons/parallel/LocalPath 41.96
43 TestAddons/parallel/NvidiaDevicePlugin 6.15
44 TestAddons/parallel/Yakd 10.27
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 10.3
56 TestErrorSpam/setup 33.82
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.73
60 TestErrorSpam/unpause 0.63
61 TestErrorSpam/stop 64.28
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.95
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 35.51
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.71
73 TestFunctional/serial/CacheCmd/cache/add_local 1.6
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.67
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 2.02
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 38.99
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.64
84 TestFunctional/serial/LogsFileCmd 0.66
85 TestFunctional/serial/InvalidService 4.56
87 TestFunctional/parallel/ConfigCmd 0.24
88 TestFunctional/parallel/DashboardCmd 6.96
89 TestFunctional/parallel/DryRun 0.22
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 24.94
99 TestFunctional/parallel/SSHCmd 0.14
100 TestFunctional/parallel/CpCmd 0.46
102 TestFunctional/parallel/FileSync 0.08
103 TestFunctional/parallel/CertSync 0.43
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
111 TestFunctional/parallel/License 0.24
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
124 TestFunctional/parallel/ServiceCmd/List 0.31
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
127 TestFunctional/parallel/ServiceCmd/Format 0.1
128 TestFunctional/parallel/ServiceCmd/URL 0.1
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
130 TestFunctional/parallel/ProfileCmd/profile_list 0.13
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
132 TestFunctional/parallel/MountCmd/any-port 5.13
133 TestFunctional/parallel/MountCmd/specific-port 1.08
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
135 TestFunctional/parallel/Version/short 0.04
136 TestFunctional/parallel/Version/components 0.17
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
141 TestFunctional/parallel/ImageCommands/ImageBuild 1.95
142 TestFunctional/parallel/ImageCommands/Setup 1.84
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.2
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.29
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.26
150 TestFunctional/parallel/DockerEnv/bash 0.29
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 175.36
161 TestMultiControlPlane/serial/DeployApp 4.42
162 TestMultiControlPlane/serial/PingHostFromPods 0.73
163 TestMultiControlPlane/serial/AddWorkerNode 76.95
164 TestMultiControlPlane/serial/NodeLabels 0.17
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.12
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 29.44
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.57
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 0.97
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.33
277 TestNoKubernetes/serial/Stop 3.75
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
294 TestStartStop/group/old-k8s-version/serial/Stop 2.06
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
307 TestStartStop/group/no-preload/serial/Stop 3.5
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
316 TestStartStop/group/embed-certs/serial/Stop 2.11
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.61
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
336 TestStartStop/group/newest-cni/serial/Stop 1.93
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-863000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-863000: exit status 85 (90.898041ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-863000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT |          |
	|         | -p download-only-863000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:04:21
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:04:21.999490    1453 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:04:21.999659    1453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:21.999662    1453 out.go:358] Setting ErrFile to fd 2...
	I0916 10:04:21.999664    1453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:21.999784    1453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	W0916 10:04:21.999883    1453 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19649-964/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19649-964/.minikube/config/config.json: no such file or directory
	I0916 10:04:22.001143    1453 out.go:352] Setting JSON to true
	I0916 10:04:22.019332    1453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":226,"bootTime":1726506036,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:04:22.019392    1453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:04:22.022213    1453 out.go:97] [download-only-863000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:04:22.022361    1453 notify.go:220] Checking for updates...
	W0916 10:04:22.022416    1453 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:04:22.027030    1453 out.go:169] MINIKUBE_LOCATION=19649
	I0916 10:04:22.032071    1453 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:04:22.035070    1453 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:04:22.039055    1453 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:04:22.042117    1453 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	W0916 10:04:22.048047    1453 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:04:22.048267    1453 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:04:22.054071    1453 out.go:97] Using the qemu2 driver based on user configuration
	I0916 10:04:22.054092    1453 start.go:297] selected driver: qemu2
	I0916 10:04:22.054105    1453 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:04:22.054184    1453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:04:22.058069    1453 out.go:169] Automatically selected the socket_vmnet network
	I0916 10:04:22.063733    1453 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0916 10:04:22.063815    1453 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:04:22.063856    1453 cni.go:84] Creating CNI manager for ""
	I0916 10:04:22.063885    1453 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 10:04:22.063932    1453 start.go:340] cluster config:
	{Name:download-only-863000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-863000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:04:22.069421    1453 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:04:22.072114    1453 out.go:97] Downloading VM boot image ...
	I0916 10:04:22.072130    1453 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/iso/arm64/minikube-v1.34.0-1726481713-19649-arm64.iso
	I0916 10:04:29.889012    1453 out.go:97] Starting "download-only-863000" primary control-plane node in "download-only-863000" cluster
	I0916 10:04:29.889035    1453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:04:29.952153    1453 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 10:04:29.952178    1453 cache.go:56] Caching tarball of preloaded images
	I0916 10:04:29.952371    1453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:04:29.956605    1453 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 10:04:29.956612    1453 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 10:04:30.033551    1453 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0916 10:04:35.207493    1453 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 10:04:35.207655    1453 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0916 10:04:35.903452    1453 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 10:04:35.903671    1453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/download-only-863000/config.json ...
	I0916 10:04:35.903691    1453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/download-only-863000/config.json: {Name:mk2e69a70769f5ad88b914cb7686bf971e95ba03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:04:35.903939    1453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 10:04:35.904134    1453 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0916 10:04:36.412159    1453 out.go:193] 
	W0916 10:04:36.418100    1453 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109001780 0x109001780 0x109001780 0x109001780 0x109001780 0x109001780 0x109001780] Decompressors:map[bz2:0x140007021b0 gz:0x140007021b8 tar:0x14000702150 tar.bz2:0x14000702160 tar.gz:0x14000702170 tar.xz:0x14000702190 tar.zst:0x140007021a0 tbz2:0x14000702160 tgz:0x14000702170 txz:0x14000702190 tzst:0x140007021a0 xz:0x14000702200 zip:0x14000702210 zst:0x14000702208] Getters:map[file:0x14000802790 http:0x140004a2190 https:0x140004a2320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0916 10:04:36.418126    1453 out_reason.go:110] 
	W0916 10:04:36.429057    1453 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 10:04:36.432969    1453 out.go:193] 
	
	
	* The control-plane node download-only-863000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-863000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
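Note: the exit status 85 from "minikube logs" above is expected for a download-only profile (no host was ever created), and the kubectl cache failure in the log is environmental: the v1.20.0 darwin/arm64 kubectl checksum URL returns 404, most likely because upstream never published darwin/arm64 binaries for the v1.20.x line. A quick way to confirm the 404 outside minikube (a sketch; assumes curl, with -L to follow the dl.k8s.io redirect):

  curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # expect 404
  curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256   # expect 200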

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-863000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (6.1s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-699000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-699000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (6.103968208s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.10s)
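Note: json-events consumes the CloudEvents stream that "minikube start -o=json" writes to stdout, one JSON object per line. A sketch for eyeballing the same stream outside the harness (assumes jq is installed and that the step event type string below matches this minikube release; the profile name json-demo is arbitrary):

  out/minikube-darwin-arm64 start -o=json --download-only -p json-demo --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"'   # print each start step as it is emitted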

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-699000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-699000: exit status 85 (75.445167ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-863000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT |                     |
	|         | -p download-only-863000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:04 PDT |
	| delete  | -p download-only-863000        | download-only-863000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT | 16 Sep 24 10:04 PDT |
	| start   | -o=json --download-only        | download-only-699000 | jenkins | v1.34.0 | 16 Sep 24 10:04 PDT |                     |
	|         | -p download-only-699000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:04:36
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:04:36.835559    1477 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:04:36.835698    1477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:36.835701    1477 out.go:358] Setting ErrFile to fd 2...
	I0916 10:04:36.835703    1477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:04:36.835835    1477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:04:36.836896    1477 out.go:352] Setting JSON to true
	I0916 10:04:36.853093    1477 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":240,"bootTime":1726506036,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:04:36.853159    1477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:04:36.857945    1477 out.go:97] [download-only-699000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:04:36.858047    1477 notify.go:220] Checking for updates...
	I0916 10:04:36.861877    1477 out.go:169] MINIKUBE_LOCATION=19649
	I0916 10:04:36.864963    1477 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:04:36.869980    1477 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:04:36.872934    1477 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:04:36.875912    1477 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	W0916 10:04:36.881786    1477 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:04:36.881933    1477 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:04:36.884861    1477 out.go:97] Using the qemu2 driver based on user configuration
	I0916 10:04:36.884869    1477 start.go:297] selected driver: qemu2
	I0916 10:04:36.884872    1477 start.go:901] validating driver "qemu2" against <nil>
	I0916 10:04:36.884929    1477 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:04:36.887912    1477 out.go:169] Automatically selected the socket_vmnet network
	I0916 10:04:36.891532    1477 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0916 10:04:36.891685    1477 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:04:36.891704    1477 cni.go:84] Creating CNI manager for ""
	I0916 10:04:36.891725    1477 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 10:04:36.891731    1477 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 10:04:36.891767    1477 start.go:340] cluster config:
	{Name:download-only-699000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:04:36.895127    1477 iso.go:125] acquiring lock: {Name:mk62b7d566d6b7389b55e81716a64201c74e95ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:04:36.898994    1477 out.go:97] Starting "download-only-699000" primary control-plane node in "download-only-699000" cluster
	I0916 10:04:36.899002    1477 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:04:36.954018    1477 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:04:36.954055    1477 cache.go:56] Caching tarball of preloaded images
	I0916 10:04:36.954234    1477 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:04:36.959324    1477 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 10:04:36.959331    1477 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0916 10:04:37.042774    1477 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0916 10:04:41.223953    1477 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0916 10:04:41.224131    1477 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19649-964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0916 10:04:41.746755    1477 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 10:04:41.746984    1477 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/download-only-699000/config.json ...
	I0916 10:04:41.746999    1477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/download-only-699000/config.json: {Name:mk78c15bae2b4d35d2174a43867a78f7d9e6299c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:04:41.747394    1477 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 10:04:41.747533    1477 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19649-964/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-699000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-699000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.10s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-699000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-138000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-138000: exit status 85 (67.193583ms)

-- stdout --
	* Profile "addons-138000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-138000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
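Note: this subtest passes because the command fails in the right way; minikube exits with status 85 when asked to enable an addon for a profile that does not exist. Scripted by hand, the same assertion is roughly (a sketch; the profile name is deliberately bogus):

  out/minikube-darwin-arm64 addons enable dashboard -p no-such-profile
  test $? -eq 85 && echo 'got the expected profile-not-found exit'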

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-138000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-138000: exit status 85 (63.844791ms)

-- stdout --
	* Profile "addons-138000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-138000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (140.43s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-138000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-138000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m20.430972208s)
--- PASS: TestAddons/Setup (140.43s)
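Note: once a start like the one above completes, the resulting addon set can be double-checked with the addons list subcommand (a sketch):

  out/minikube-darwin-arm64 -p addons-138000 addons list   # table of enabled/disabled addons for the profile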

TestAddons/serial/Volcano (39.18s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 7.488583ms
addons_test.go:913: volcano-controller stabilized in 7.527083ms
addons_test.go:905: volcano-admission stabilized in 8.019541ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-tlvj2" [176d0fa6-313c-4701-9d97-c03968288c77] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004187417s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-5rxss" [d6d44de0-ce72-4300-b8a0-8fd0c7e57fd0] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003538709s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-sjkpv" [8812f7ca-00bd-4649-bed7-382109f6e001] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.006126958s
addons_test.go:932: (dbg) Run:  kubectl --context addons-138000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-138000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-138000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [cea1509d-b0f3-455a-8a7b-7871a9451dae] Pending
helpers_test.go:344: "test-job-nginx-0" [cea1509d-b0f3-455a-8a7b-7871a9451dae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [cea1509d-b0f3-455a-8a7b-7871a9451dae] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004922417s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-138000 addons disable volcano --alsologtostderr -v=1: (9.922964084s)
--- PASS: TestAddons/serial/Volcano (39.18s)
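Note: each "waiting ... for pods matching" step above polls pods by label until they report healthy. The kubectl equivalent of one of those waits, for anyone reproducing this by hand, is roughly (a sketch; same label and namespace as the scheduler wait above):

  kubectl --context addons-138000 -n volcano-system wait pod -l app=volcano-scheduler --for=condition=Ready --timeout=360s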

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-138000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-138000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Ingress (17.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-138000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-138000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-138000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [01e98ae6-e664-4242-9b9c-2354b2054db2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [01e98ae6-e664-4242-9b9c-2354b2054db2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.0055195s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-138000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-138000 addons disable ingress --alsologtostderr -v=1: (7.222850125s)
--- PASS: TestAddons/parallel/Ingress (17.47s)
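Note: the ssh'd curl above is exercising name-based virtual hosting; ingress-nginx routes on the Host header, not the IP. The host-side equivalent is roughly (a sketch; assumes the socket_vmnet address reported by "minikube ip" is reachable from the host, as the nslookup step suggests):

  IP=$(out/minikube-darwin-arm64 -p addons-138000 ip)   # 192.168.105.2 in this run
  curl -s "http://$IP/" -H 'Host: nginx.example.com'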

TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rjl25" [e15487c5-5f5c-4979-b10e-52a5bb9b0720] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0087065s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-138000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-138000: (5.283557875s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (5.26s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.282958ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gscsm" [a83f5052-5c41-46c4-9356-5fb3b5b8759d] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006094s
addons_test.go:417: (dbg) Run:  kubectl --context addons-138000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

TestAddons/parallel/CSI (42.72s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.996708ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-138000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-138000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5c2f9537-f66e-465c-ae64-3d0f0194aa12] Pending
helpers_test.go:344: "task-pv-pod" [5c2f9537-f66e-465c-ae64-3d0f0194aa12] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5c2f9537-f66e-465c-ae64-3d0f0194aa12] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004156s
addons_test.go:590: (dbg) Run:  kubectl --context addons-138000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-138000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-138000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-138000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-138000 delete pod task-pv-pod: (1.238188125s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-138000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-138000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-138000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [adfefd43-df47-45a7-8ffa-dd3e5f01b618] Pending
helpers_test.go:344: "task-pv-pod-restore" [adfefd43-df47-45a7-8ffa-dd3e5f01b618] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [adfefd43-df47-45a7-8ffa-dd3e5f01b618] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007960208s
addons_test.go:632: (dbg) Run:  kubectl --context addons-138000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-138000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-138000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-138000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.130635917s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.72s)
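Note: the restore step hinges on a PVC whose dataSource references the VolumeSnapshot taken earlier. The repository's testdata/csi-hostpath-driver/pvc-restore.yaml is not reproduced in this log; the pattern is roughly the following sketch (the storage class name and size are assumptions, not copied from the testdata):

  kubectl --context addons-138000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc   # assumed; must match the hostpath driver's StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                    # assumed size
  dataSource:
    name: new-snapshot-demo           # the VolumeSnapshot created above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF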

TestAddons/parallel/Headlamp (16.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-138000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-9xnvz" [3d0009ae-4336-473a-aa4b-31c667a28a6b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-9xnvz" [3d0009ae-4336-473a-aa4b-31c667a28a6b] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.009957208s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-138000 addons disable headlamp --alsologtostderr -v=1: (5.289075917s)
--- PASS: TestAddons/parallel/Headlamp (16.65s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-zq77f" [e021c57d-1d35-4c61-9818-0bcd25aa95c9] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003760167s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-138000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (41.96s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-138000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-138000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-138000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fcdc8568-6e5f-44ec-9bbb-2a63ac2eef6a] Pending
helpers_test.go:344: "test-local-path" [fcdc8568-6e5f-44ec-9bbb-2a63ac2eef6a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fcdc8568-6e5f-44ec-9bbb-2a63ac2eef6a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fcdc8568-6e5f-44ec-9bbb-2a63ac2eef6a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005426792s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-138000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 ssh "cat /opt/local-path-provisioner/pvc-f2ee6a12-8e7e-4ac1-8d0e-6b2e562c34f1_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-138000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-138000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-138000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.461234833s)
--- PASS: TestAddons/parallel/LocalPath (41.96s)

TestAddons/parallel/NvidiaDevicePlugin (6.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rzk7g" [95a607d1-6649-4565-b268-b0ee84e53c1b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003882458s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-138000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.15s)

TestAddons/parallel/Yakd (10.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-njjrz" [7ccf6992-8658-4265-8cbd-10b0ec70c1b9] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004795875s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-138000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-138000 addons disable yakd --alsologtostderr -v=1: (5.264022792s)
--- PASS: TestAddons/parallel/Yakd (10.27s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-138000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-138000: (12.2061895s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-138000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-138000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-138000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.3s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.30s)

TestErrorSpam/setup (33.82s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-504000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-504000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 --driver=qemu2 : (33.820192458s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (33.82s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.73s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 pause
--- PASS: TestErrorSpam/pause (0.73s)

TestErrorSpam/unpause (0.63s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (64.28s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 stop: (12.206498333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 stop: (26.040576958s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-504000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-504000 stop: (26.034749375s)
--- PASS: TestErrorSpam/stop (64.28s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19649-964/.minikube/files/etc/test/nested/copy/1451/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.95s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-510000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-510000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.945537s)
--- PASS: TestFunctional/serial/StartWithProxy (47.95s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.51s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-510000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-510000 --alsologtostderr -v=8: (35.510380875s)
functional_test.go:663: soft start took 35.510788417s for "functional-510000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.51s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-510000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

TestFunctional/serial/CacheCmd/cache/add_local (1.6s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local4202899528/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 cache add minikube-local-cache-test:functional-510000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-510000 cache add minikube-local-cache-test:functional-510000: (1.261750334s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 cache delete minikube-local-cache-test:functional-510000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-510000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.60s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.67s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (69.921375ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.67s)
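
The cache_reload transcript above is a remove/verify/reload/verify loop: delete the image inside the VM, confirm crictl no longer sees it, run `cache reload`, and confirm it is back. A minimal Go sketch of the same check, assuming a `minikube` binary on PATH and reusing this run's profile name:

```go
// cachereload_sketch.go: re-run the cache_reload check by hand.
// Assumes `minikube` is on PATH and the profile below exists (hypothetical).
package main

import (
	"fmt"
	"os/exec"
)

const profile = "functional-510000"

func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	img := "registry.k8s.io/pause:latest"
	// 1. Remove the image inside the VM.
	run("-p", profile, "ssh", "sudo docker rmi "+img)
	// 2. inspecti should now fail (exit status 1 in the log above).
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	// 3. Reload the image from minikube's local cache.
	run("-p", profile, "cache", "reload")
	// 4. inspecti should succeed again.
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}
```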

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (2.02s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 kubectl -- --context functional-510000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-510000 kubectl -- --context functional-510000 get pods: (2.020344959s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.02s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-510000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-510000 get pods: (1.016013209s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (38.99s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-510000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-510000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.987983667s)
functional_test.go:761: restart took 38.988091542s for "functional-510000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.99s)

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-510000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
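
ComponentHealth asserts two things per control-plane pod: `status.phase` is Running and the Ready condition is True, which is exactly what the phase/status pairs above report. A sketch of the same JSON check outside the test harness; the struct only declares the fields the check reads, and the context name is taken from this run:

```go
// componenthealth_sketch.go: decode `kubectl get po -o json` and report
// phase plus Ready condition for each control-plane pod.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-510000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		// Control-plane pods carry a `component` label (etcd, kube-apiserver, ...).
		fmt.Printf("%s phase: %s, status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}
```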

                                                
                                    
TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.66s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1305663586/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.66s)

TestFunctional/serial/InvalidService (4.56s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-510000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-510000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-510000: exit status 115 (148.316417ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32381 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-510000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-510000 delete -f testdata/invalidsvc.yaml: (1.312085417s)
--- PASS: TestFunctional/serial/InvalidService (4.56s)

TestFunctional/parallel/ConfigCmd (0.24s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 config get cpus: exit status 14 (31.683667ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 config get cpus: exit status 14 (30.893959ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
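
The two Non-zero exit entries above are the pass condition, not failures: `minikube config get` exits with status 14 when the key is unset, so each unset/get pair proves the value was actually cleared. A small sketch that distinguishes that exit code, assuming `minikube` on PATH and this run's profile name:

```go
// configcmd_sketch.go: check whether a minikube config key is set by
// inspecting the exit code of `config get` (14 means "not found").
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-510000", "config", "get", "cpus")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		fmt.Println("cpus is not set (exit status 14, as in the log above)")
	case err != nil:
		fmt.Println("unexpected failure:", err)
	default:
		fmt.Println("cpus is set")
	}
}
```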

                                                
                                    
TestFunctional/parallel/DashboardCmd (6.96s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-510000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-510000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2244: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.96s)

TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-510000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-510000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.669333ms)

-- stdout --
	* [functional-510000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0916 10:22:30.639009    2231 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:30.639152    2231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:30.639155    2231 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:30.639157    2231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:30.639265    2231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:22:30.640276    2231 out.go:352] Setting JSON to false
	I0916 10:22:30.656542    2231 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1314,"bootTime":1726506036,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:22:30.656605    2231 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:22:30.661588    2231 out.go:177] * [functional-510000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0916 10:22:30.668439    2231 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:22:30.668529    2231 notify.go:220] Checking for updates...
	I0916 10:22:30.675576    2231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:22:30.676860    2231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:22:30.679530    2231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:30.682524    2231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:22:30.685619    2231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:22:30.688754    2231 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:22:30.689042    2231 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:22:30.693530    2231 out.go:177] * Using the qemu2 driver based on existing profile
	I0916 10:22:30.700560    2231 start.go:297] selected driver: qemu2
	I0916 10:22:30.700569    2231 start.go:901] validating driver "qemu2" against &{Name:functional-510000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:30.700628    2231 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:22:30.707496    2231 out.go:201] 
	W0916 10:22:30.711521    2231 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:22:30.715478    2231 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-510000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-510000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-510000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.712459ms)

-- stdout --
	* [functional-510000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0916 10:22:30.520834    2227 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:30.520932    2227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:30.520935    2227 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:30.520938    2227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:30.521067    2227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
	I0916 10:22:30.522612    2227 out.go:352] Setting JSON to false
	I0916 10:22:30.540143    2227 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1314,"bootTime":1726506036,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0916 10:22:30.540251    2227 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 10:22:30.545608    2227 out.go:177] * [functional-510000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0916 10:22:30.553603    2227 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 10:22:30.553669    2227 notify.go:220] Checking for updates...
	I0916 10:22:30.561462    2227 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	I0916 10:22:30.564483    2227 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0916 10:22:30.567613    2227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:30.570450    2227 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	I0916 10:22:30.573596    2227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:22:30.576792    2227 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 10:22:30.577058    2227 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:22:30.581504    2227 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0916 10:22:30.588531    2227 start.go:297] selected driver: qemu2
	I0916 10:22:30.588538    2227 start.go:901] validating driver "qemu2" against &{Name:functional-510000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:30.588601    2227 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:22:30.594501    2227 out.go:201] 
	W0916 10:22:30.598567    2227 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 10:22:30.601498    2227 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (24.94s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cdaf1d80-c573-4295-8681-5e8fda8d1812] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00984375s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-510000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-510000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-510000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-510000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c8be193-079a-47cf-aa80-18644a2134e2] Pending
helpers_test.go:344: "sp-pod" [6c8be193-079a-47cf-aa80-18644a2134e2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6c8be193-079a-47cf-aa80-18644a2134e2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008816542s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-510000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-510000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-510000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [24dcc45c-91ea-4110-8a85-6414c7b06b1b] Pending
helpers_test.go:344: "sp-pod" [24dcc45c-91ea-4110-8a85-6414c7b06b1b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0916 10:22:09.268835    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [24dcc45c-91ea-4110-8a85-6414c7b06b1b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007344292s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-510000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.94s)
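
The second apply of pod.yaml above is the point of the test: the new sp-pod mounts the same PVC, so /tmp/mount/foo written by the first pod must still exist after deletion and recreation. A compressed sketch of that persistence check, with manifest paths as in the test's testdata and the readiness wait elided:

```go
// pvc_sketch.go: write through one pod, recreate it against the same PVC,
// and read the file back. Context name taken from this run.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) ([]byte, error) {
	args = append([]string{"--context", "functional-510000"}, args...)
	return exec.Command("kubectl", args...).CombinedOutput()
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits for the new pod to become Running here; a sketch
	// could poll `kubectl get pod sp-pod` the same way.
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("files on the claim after pod recreation:\n%s", out)
}
```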

                                                
                                    
TestFunctional/parallel/SSHCmd (0.14s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.46s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh -n functional-510000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 cp functional-510000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1025117371/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh -n functional-510000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh -n functional-510000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.46s)

TestFunctional/parallel/FileSync (0.08s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1451/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo cat /etc/test/nested/copy/1451/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.43s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1451.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo cat /etc/ssl/certs/1451.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1451.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo cat /usr/share/ca-certificates/1451.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14512.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo cat /etc/ssl/certs/14512.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14512.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo cat /usr/share/ca-certificates/14512.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.43s)

TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-510000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 ssh "sudo systemctl is-active crio": exit status 1 (121.159708ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
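
The non-zero exit here is again the pass condition: with docker as the active container runtime, crio must be inactive, and `systemctl is-active` prints the unit state while exiting non-zero for anything but "active" (status 3 in the inner ssh above). A sketch of the same probe that reads the printed state instead of the exit code:

```go
// runtimecheck_sketch.go: confirm crio is not running alongside docker.
// Assumes `minikube` is on PATH; profile name taken from this run.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() keeps only stdout, so the "Process exited with status 3"
	// stderr line from minikube ssh does not pollute the state string.
	out, _ := exec.Command("minikube", "-p", "functional-510000",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if state == "active" {
		fmt.Println("crio unexpectedly active alongside docker")
	} else {
		fmt.Println("crio state:", state) // "inactive" in the run above
	}
}
```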

                                                
                                    
TestFunctional/parallel/License (0.24s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-510000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-510000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-510000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2091: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-510000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-510000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-510000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a29b2267-a9b0-4412-9ee1-f26223839cb9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a29b2267-a9b0-4412-9ee1-f26223839cb9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004143625s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-510000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
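
Until `minikube tunnel` is up, a LoadBalancer service stays Pending with no ingress IP, so the jsonpath query above is the usual readiness probe for the tunnel. A sketch that polls it rather than reading it once, with the context and service names taken from this run:

```go
// tunnelip_sketch.go: wait for `minikube tunnel` to assign the service a
// LoadBalancer ingress IP, mirroring the jsonpath query in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	jsonpath := "jsonpath={.status.loadBalancer.ingress[0].ip}"
	for i := 0; i < 30; i++ {
		out, _ := exec.Command("kubectl", "--context", "functional-510000",
			"get", "svc", "nginx-svc", "-o", jsonpath).Output()
		if ip := strings.TrimSpace(string(out)); ip != "" {
			fmt.Println("tunnel ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned; is `minikube tunnel` running?")
}
```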

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.94.93 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
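
The dig invocation queries kube-dns (10.96.0.10) directly from the host, which only works while the tunnel routes the service CIDR. The same lookup in Go, pointing a custom resolver at the cluster DNS:

```go
// clusterdns_sketch.go: resolve a service name against the cluster DNS
// server, as the `dig @10.96.0.10` call above does.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Send every query to kube-dns instead of the host's resolver.
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ips, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved:", ips)
}
```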

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-510000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-510000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-510000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-zl5k5" [2883cf94-374a-429c-a1be-27a83b001bce] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-zl5k5" [2883cf94-374a-429c-a1be-27a83b001bce] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.012081542s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
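
The sequence above is the standard create/expose/wait pattern: create the deployment, expose it as a NodePort service, then poll until every matching pod reports Ready. A condensed sketch of the waiting half, shelling out to kubectl rather than using the suite's helpers_test.go watcher (label, namespace and context as in this run):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(10 * time.Minute)
        for time.Now().Before(deadline) {
            // One Ready condition status ("True"/"False") per matching pod.
            out, _ := exec.Command("kubectl", "--context", "functional-510000",
                "get", "pods", "-n", "default", "-l", "app=hello-node", "-o",
                `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`).Output()
            statuses := strings.Fields(string(out))
            ready := len(statuses) > 0
            for _, s := range statuses {
                ready = ready && s == "True"
            }
            if ready {
                fmt.Println("app=hello-node is healthy")
                return
            }
            time.Sleep(2 * time.Second)
        }
        panic("timed out waiting for app=hello-node")
    }
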
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 service list -o json
functional_test.go:1494: Took "296.789917ms" to run "out/minikube-darwin-arm64 -p functional-510000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31791
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31791
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
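
The endpoint printed above is just the node IP plus the service's NodePort, so once the URL is known it can be probed directly. A small sketch that asks minikube for the URL and issues a GET (binary path and profile taken from this run):

    package main

    import (
        "fmt"
        "net/http"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-510000",
            "service", "hello-node", "--url").Output()
        if err != nil {
            panic(err)
        }
        url := strings.TrimSpace(string(out)) // http://192.168.105.4:31791 in this run
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }
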
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "93.838292ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.653709ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "86.4655ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.569958ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1486206501/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726507342239839000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1486206501/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726507342239839000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1486206501/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726507342239839000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1486206501/001/test-1726507342239839000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.231833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 17:22 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 17:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 17:22 test-1726507342239839000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh cat /mount-9p/test-1726507342239839000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-510000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c9bf30c0-873b-4207-980b-9e86d2d1727d] Pending
helpers_test.go:344: "busybox-mount" [c9bf30c0-873b-4207-980b-9e86d2d1727d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c9bf30c0-873b-4207-980b-9e86d2d1727d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c9bf30c0-873b-4207-980b-9e86d2d1727d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007785s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-510000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1486206501/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.13s)
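
Note the first findmnt probe exits non-zero and is simply retried: the 9p server that backs the mount comes up asynchronously, so the mount point is not visible in the guest immediately. A sketch of that retry loop around the same ssh command (binary path and profile from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 10; i++ {
            // Succeeds once the 9p filesystem shows up in the guest's mount table.
            err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-510000",
                "ssh", "findmnt -T /mount-9p | grep 9p").Run()
            if err == nil {
                fmt.Println("mount is up")
                return
            }
            time.Sleep(time.Second)
        }
        panic("mount never appeared")
    }
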
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port797183888/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.390458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port797183888/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 ssh "sudo umount -f /mount-9p": exit status 1 (66.302ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-510000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port797183888/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.08s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3215452008/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3215452008/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3215452008/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T" /mount1: exit status 1 (72.643042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T" /mount1: exit status 1 (96.436167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-510000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3215452008/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3215452008/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-510000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3215452008/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-510000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-510000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-510000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-510000 image ls --format short --alsologtostderr:
I0916 10:22:43.493628    2392 out.go:345] Setting OutFile to fd 1 ...
I0916 10:22:43.493769    2392 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.493773    2392 out.go:358] Setting ErrFile to fd 2...
I0916 10:22:43.493776    2392 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.493923    2392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
I0916 10:22:43.494397    2392 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.494467    2392 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.495372    2392 ssh_runner.go:195] Run: systemctl --version
I0916 10:22:43.495381    2392 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/functional-510000/id_rsa Username:docker}
I0916 10:22:43.527368    2392 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-510000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/kicbase/echo-server               | functional-510000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/library/minikube-local-cache-test | functional-510000 | c80f7e4f8f699 | 30B    |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-510000 image ls --format table --alsologtostderr:
I0916 10:22:43.735937    2402 out.go:345] Setting OutFile to fd 1 ...
I0916 10:22:43.736086    2402 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.736089    2402 out.go:358] Setting ErrFile to fd 2...
I0916 10:22:43.736091    2402 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.736233    2402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
I0916 10:22:43.736648    2402 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.736715    2402 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.737619    2402 ssh_runner.go:195] Run: systemctl --version
I0916 10:22:43.737631    2402 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/functional-510000/id_rsa Username:docker}
I0916 10:22:43.768695    2402 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-510000 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"c80f7e4f8f699db254f4ccfe5cced4abb2dce6e10c5d8a8f55179922428cb572","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-510000"],"size":"30"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-510000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-510000 image ls --format json --alsologtostderr:
I0916 10:22:43.659571    2397 out.go:345] Setting OutFile to fd 1 ...
I0916 10:22:43.659699    2397 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.659702    2397 out.go:358] Setting ErrFile to fd 2...
I0916 10:22:43.659705    2397 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.659841    2397 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
I0916 10:22:43.660249    2397 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.660319    2397 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.661146    2397 ssh_runner.go:195] Run: systemctl --version
I0916 10:22:43.661153    2397 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/functional-510000/id_rsa Username:docker}
I0916 10:22:43.691207    2397 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
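
The JSON listing is an array of objects with id, repoDigests, repoTags and size fields, with sizes encoded as strings. A minimal decoder for that shape (the struct below is inferred from the output above, not taken from minikube's source):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // bytes, as a decimal string
    }

    func main() {
        raw := []byte(`[{"id":"abc","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"}]`)
        var imgs []image
        if err := json.Unmarshal(raw, &imgs); err != nil {
            panic(err)
        }
        for _, im := range imgs {
            fmt.Println(im.RepoTags, im.Size)
        }
    }
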
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-510000 image ls --format yaml --alsologtostderr:
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: c80f7e4f8f699db254f4ccfe5cced4abb2dce6e10c5d8a8f55179922428cb572
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-510000
size: "30"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-510000
size: "4780000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-510000 image ls --format yaml --alsologtostderr:
I0916 10:22:43.585893    2394 out.go:345] Setting OutFile to fd 1 ...
I0916 10:22:43.586078    2394 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.586082    2394 out.go:358] Setting ErrFile to fd 2...
I0916 10:22:43.586084    2394 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.586200    2394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
I0916 10:22:43.586599    2394 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.586658    2394 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.587465    2394 ssh_runner.go:195] Run: systemctl --version
I0916 10:22:43.587473    2394 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/functional-510000/id_rsa Username:docker}
I0916 10:22:43.615566    2394 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-510000 ssh pgrep buildkitd: exit status 1 (63.653709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image build -t localhost/my-image:functional-510000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-510000 image build -t localhost/my-image:functional-510000 testdata/build --alsologtostderr: (1.808261792s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-510000 image build -t localhost/my-image:functional-510000 testdata/build --alsologtostderr:
I0916 10:22:43.700130    2400 out.go:345] Setting OutFile to fd 1 ...
I0916 10:22:43.700352    2400 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.700357    2400 out.go:358] Setting ErrFile to fd 2...
I0916 10:22:43.700360    2400 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:22:43.700483    2400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19649-964/.minikube/bin
I0916 10:22:43.700922    2400 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.701672    2400 config.go:182] Loaded profile config "functional-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 10:22:43.702489    2400 ssh_runner.go:195] Run: systemctl --version
I0916 10:22:43.702497    2400 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19649-964/.minikube/machines/functional-510000/id_rsa Username:docker}
I0916 10:22:43.731310    2400 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.203445484.tar
I0916 10:22:43.731363    2400 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 10:22:43.736176    2400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.203445484.tar
I0916 10:22:43.738083    2400 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.203445484.tar: stat -c "%s %y" /var/lib/minikube/build/build.203445484.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.203445484.tar': No such file or directory
I0916 10:22:43.738106    2400 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.203445484.tar --> /var/lib/minikube/build/build.203445484.tar (3072 bytes)
I0916 10:22:43.748768    2400 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.203445484
I0916 10:22:43.752293    2400 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.203445484 -xf /var/lib/minikube/build/build.203445484.tar
I0916 10:22:43.755944    2400 docker.go:360] Building image: /var/lib/minikube/build/build.203445484
I0916 10:22:43.755999    2400 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-510000 /var/lib/minikube/build/build.203445484
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:3c652a22eff2cc754ed7b02b058482dcb97a04a89f57110c99ebc87c4e60fdb7 done
#8 naming to localhost/my-image:functional-510000 done
#8 DONE 0.0s
I0916 10:22:45.408001    2400 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-510000 /var/lib/minikube/build/build.203445484: (1.652024792s)
I0916 10:22:45.408073    2400 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.203445484
I0916 10:22:45.411858    2400 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.203445484.tar
I0916 10:22:45.415315    2400 build_images.go:217] Built localhost/my-image:functional-510000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.203445484.tar
I0916 10:22:45.415330    2400 build_images.go:133] succeeded building to: functional-510000
I0916 10:22:45.415333    2400 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.95s)
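
The stderr trace lays out the build mechanics: the local context is packed into a tar (build.203445484.tar), copied into the guest over scp, unpacked under /var/lib/minikube/build, and handed to docker build there. A sketch of the packing step, showing how a context directory becomes a tar stream; this illustrates the idea and is not minikube's actual implementation:

    package main

    import (
        "archive/tar"
        "io"
        "os"
        "path/filepath"
    )

    // tarDir packs dir into a tar stream, storing paths relative to dir.
    func tarDir(dir string, w io.Writer) error {
        tw := tar.NewWriter(w)
        defer tw.Close()
        return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            rel, err := filepath.Rel(dir, path)
            if err != nil || rel == "." {
                return err // nil for the root directory itself
            }
            hdr, err := tar.FileInfoHeader(info, "")
            if err != nil {
                return err
            }
            hdr.Name = rel
            if err := tw.WriteHeader(hdr); err != nil {
                return err
            }
            if info.IsDir() {
                return nil
            }
            f, err := os.Open(path)
            if err != nil {
                return err
            }
            defer f.Close()
            _, err = io.Copy(tw, f)
            return err
        })
    }

    func main() {
        out, err := os.Create("build-context.tar")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        if err := tarDir("testdata/build", out); err != nil {
            panic(err)
        }
    }
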
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.818940875s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-510000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image load --daemon kicbase/echo-server:functional-510000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image load --daemon kicbase/echo-server:functional-510000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-510000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image load --daemon kicbase/echo-server:functional-510000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image save kicbase/echo-server:functional-510000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image rm kicbase/echo-server:functional-510000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.20s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-510000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 image save --daemon kicbase/echo-server:functional-510000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-510000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.26s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-510000 docker-env) && out/minikube-darwin-arm64 status -p functional-510000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-510000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.29s)
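
eval $(minikube docker-env) exports DOCKER_HOST and related variables so the host's docker CLI talks to the daemon inside the VM, which is why the docker images call afterwards lists the cluster's images. The same wiring can be done without a shell; a rough sketch that parses the export lines and re-runs docker (the exact variables emitted depend on the driver, so treat this as illustrative):

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-510000",
            "docker-env").Output()
        if err != nil {
            panic(err)
        }
        for _, line := range strings.Split(string(out), "\n") {
            line = strings.TrimSpace(line)
            if !strings.HasPrefix(line, "export ") {
                continue // skip comments and blank lines
            }
            if k, v, ok := strings.Cut(strings.TrimPrefix(line, "export "), "="); ok {
                os.Setenv(k, strings.Trim(v, `"`))
            }
        }
        cmd := exec.Command("docker", "images")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
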
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 update-context --alsologtostderr -v=2
E0916 10:22:45.118963    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-510000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-510000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-510000
--- PASS: TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-510000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-094000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0916 10:23:26.081538    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:24:48.003338    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-094000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m55.171261875s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (175.36s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-094000 -- rollout status deployment/busybox: (2.982322875s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-7hz9n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-g679m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-zv9c9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-7hz9n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-g679m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-zv9c9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-7hz9n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-g679m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-zv9c9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.42s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-7hz9n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-7hz9n -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-g679m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-g679m -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-zv9c9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-094000 -- exec busybox-7dff88458-zv9c9 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)
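
Aside: each pod check above is a lookup-then-ping round trip: resolve host.minikube.internal inside the pod, then send one ICMP echo to the returned address. The Go sketch below reproduces that round trip outside the test harness; it is illustrative only (not the suite's source), and the binary path, profile, and pod name are copied from the log above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile, pod := "ha-094000", "busybox-7dff88458-7hz9n" // names from the log above

	// Resolve host.minikube.internal inside the pod; the pipeline takes
	// line 5 of the nslookup output and its third field, as the test does.
	lookup := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))

	// One ICMP echo from the pod back to the resolved host address.
	ping := fmt.Sprintf("ping -c 1 %s", hostIP)
	if err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "exec", pod, "--", "sh", "-c", ping).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("pod %s reached host %s\n", pod, hostIP)
}
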
TestMultiControlPlane/serial/AddWorkerNode (76.95s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-094000 -v=7 --alsologtostderr
E0916 10:26:49.891601    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:49.898563    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:49.911874    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:49.935252    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:49.977261    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:50.060605    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:50.222409    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:50.544050    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:51.187540    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:52.470981    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:26:55.034419    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:27:00.156809    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-094000 -v=7 --alsologtostderr: (1m16.720390375s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (76.95s)

TestMultiControlPlane/serial/NodeLabels (0.17s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-094000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.17s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.12s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp testdata/cp-test.txt ha-094000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3591659185/001/cp-test_ha-094000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000 "sudo cat /home/docker/cp-test.txt"
E0916 10:27:04.112931    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/addons-138000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000:/home/docker/cp-test.txt ha-094000-m02:/home/docker/cp-test_ha-094000_ha-094000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m02 "sudo cat /home/docker/cp-test_ha-094000_ha-094000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000:/home/docker/cp-test.txt ha-094000-m03:/home/docker/cp-test_ha-094000_ha-094000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m03 "sudo cat /home/docker/cp-test_ha-094000_ha-094000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000:/home/docker/cp-test.txt ha-094000-m04:/home/docker/cp-test_ha-094000_ha-094000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m04 "sudo cat /home/docker/cp-test_ha-094000_ha-094000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp testdata/cp-test.txt ha-094000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3591659185/001/cp-test_ha-094000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m02:/home/docker/cp-test.txt ha-094000:/home/docker/cp-test_ha-094000-m02_ha-094000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000 "sudo cat /home/docker/cp-test_ha-094000-m02_ha-094000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m02:/home/docker/cp-test.txt ha-094000-m03:/home/docker/cp-test_ha-094000-m02_ha-094000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m03 "sudo cat /home/docker/cp-test_ha-094000-m02_ha-094000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m02:/home/docker/cp-test.txt ha-094000-m04:/home/docker/cp-test_ha-094000-m02_ha-094000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m04 "sudo cat /home/docker/cp-test_ha-094000-m02_ha-094000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp testdata/cp-test.txt ha-094000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3591659185/001/cp-test_ha-094000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m03:/home/docker/cp-test.txt ha-094000:/home/docker/cp-test_ha-094000-m03_ha-094000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000 "sudo cat /home/docker/cp-test_ha-094000-m03_ha-094000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m03:/home/docker/cp-test.txt ha-094000-m02:/home/docker/cp-test_ha-094000-m03_ha-094000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m02 "sudo cat /home/docker/cp-test_ha-094000-m03_ha-094000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m03:/home/docker/cp-test.txt ha-094000-m04:/home/docker/cp-test_ha-094000-m03_ha-094000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m04 "sudo cat /home/docker/cp-test_ha-094000-m03_ha-094000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp testdata/cp-test.txt ha-094000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3591659185/001/cp-test_ha-094000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m04:/home/docker/cp-test.txt ha-094000:/home/docker/cp-test_ha-094000-m04_ha-094000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000 "sudo cat /home/docker/cp-test_ha-094000-m04_ha-094000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m04:/home/docker/cp-test.txt ha-094000-m02:/home/docker/cp-test_ha-094000-m04_ha-094000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m02 "sudo cat /home/docker/cp-test_ha-094000-m04_ha-094000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 cp ha-094000-m04:/home/docker/cp-test.txt ha-094000-m03:/home/docker/cp-test_ha-094000-m04_ha-094000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-094000 ssh -n ha-094000-m03 "sudo cat /home/docker/cp-test_ha-094000-m04_ha-094000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.12s)
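
Aside: every cp/ssh pair above is one round trip: `minikube cp` pushes a file onto a node, then `minikube ssh -n <node> "sudo cat ..."` reads it back for comparison. A minimal Go sketch of a single round trip (illustrative only; the profile and node names are taken from the log above):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile, node := "ha-094000", "ha-094000-m02" // names from the log above

	// Push the local test file onto the node, as `minikube cp` does above.
	if err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}

	// Read it back over ssh and compare against the original bytes.
	got, err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "ssh",
		"-n", node, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println("round trip intact:", bytes.Equal(got, want))
}
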
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (29.44s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0916 10:32:17.606969    1451 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19649-964/.minikube/profiles/functional-510000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (29.442188375s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (29.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.57s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-755000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-755000 --output=json --user=testUser: (3.567537917s)
--- PASS: TestJSONOutput/stop/Command (3.57s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-627000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-627000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.892292ms)

-- stdout --
	{"specversion":"1.0","id":"8cdc59f3-71c9-4920-9f94-68a14fe73183","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-627000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a26e7520-f80b-4e63-91e6-37d9e4ce8259","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"2560ca2e-4b1d-453a-a664-415302c036df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig"}}
	{"specversion":"1.0","id":"36d463d2-b140-4716-9ccb-db7da3aaecbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6b3206da-1695-4fd8-b669-17f14c0188f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"33a5bac5-3b27-4a5b-96e4-2ca7b5f1ccb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube"}}
	{"specversion":"1.0","id":"9878f829-a747-4984-9c9e-bd7bf4324231","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ae42558b-6d19-40e3-b359-5e2984b16137","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-627000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-627000
--- PASS: TestErrorJSONOutput (0.20s)
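
Aside: with --output=json, minikube prints one CloudEvents-style JSON object per line, as shown in the stdout block above; the final io.k8s.sigs.minikube.error event carries the exit code. A minimal Go sketch (field names assumed to match the JSON shown above, nothing more) that picks such error events out of a piped log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the JSON lines above.
type minikubeEvent struct {
	Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.error
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start ... --output=json` output into stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s (exit code %s)\n",
				ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
		}
	}
}
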
TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.97s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.97s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-472000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-472000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.863041ms)

-- stdout --
	* [NoKubernetes-472000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19649
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19649-964/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19649-964/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-472000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-472000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.636916ms)

-- stdout --
	* The control-plane node NoKubernetes-472000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-472000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.33s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.6544925s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.673321291s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.33s)

TestNoKubernetes/serial/Stop (3.75s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-472000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-472000: (3.750364417s)
--- PASS: TestNoKubernetes/serial/Stop (3.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-472000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-472000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.391667ms)

-- stdout --
	* The control-plane node NoKubernetes-472000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-472000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-385000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

TestStartStop/group/old-k8s-version/serial/Stop (2.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-424000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-424000 --alsologtostderr -v=3: (2.063522292s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-424000 -n old-k8s-version-424000: exit status 7 (37.558375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-424000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/no-preload/serial/Stop (3.5s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-117000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-117000 --alsologtostderr -v=3: (3.50071725s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.50s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (56.185041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-117000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (2.11s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-663000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-663000 --alsologtostderr -v=3: (2.110722666s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.11s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-663000 -n embed-certs-663000: exit status 7 (59.8745ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-663000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.61s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-665000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-665000 --alsologtostderr -v=3: (3.613828625s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.61s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-665000 -n default-k8s-diff-port-665000: exit status 7 (58.526875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-665000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-296000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (1.93s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-296000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-296000 --alsologtostderr -v=3: (1.926383667s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.93s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-296000 -n newest-cni-296000: exit status 7 (55.666667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-296000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.29s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-900000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-900000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-900000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-900000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-900000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-900000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-900000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-900000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-900000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-900000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-900000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

>>> host: /etc/hosts:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

>>> host: /etc/resolv.conf:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-900000

>>> host: crictl pods:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

>>> host: crictl containers:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

>>> k8s: describe netcat deployment:
error: context "cilium-900000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-900000" does not exist

>>> k8s: netcat logs:
error: context "cilium-900000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-900000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-900000" does not exist

>>> k8s: coredns logs:
error: context "cilium-900000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-900000" does not exist

>>> k8s: api server logs:
error: context "cilium-900000" does not exist

>>> host: /etc/cni:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-900000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-900000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-900000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-900000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-900000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-900000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-900000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-900000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-900000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-900000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-900000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-900000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-900000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-900000"

                                                
                                                
----------------------- debugLogs end: cilium-900000 [took: 2.1820025s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-900000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-900000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-793000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-793000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)